
A study: HiringBranch vs standardized language tests

Standardized Language Tests have been the go-to tool for HR for decades. Why is HiringBranch's four-in-one assessment better?

We set out to dig deeper into the differences by putting HiringBranch to the test in this white paper.

We put a group of contact center candidates through both the HiringBranch four-in-one English assessment as well as an industry-leading standardized English language test. Then we compared the results.

Spoiler alert! The two tests passed different candidates. The standardized language test ranked some candidates highly that the HiringBranch test did not recommend for hiring, and these differences mattered for job performance. The opposite was also true: the standardized language test overlooked candidates who made the grade with HiringBranch.

Find out what the differences are (with examples) in this study.

Download case study

Do you want candidates who can pass a language test? Or candidates who can communicate with customers?


Language assessments are a part of every contact center recruiter’s toolkit. Whether it’s checking grammar or spoken fluency, the goal has been to ensure that an applicant has a basic proficiency level in English or another language.

As the industry automates, and customer expectations increase, the job of the customer service agent is becoming more complex. Is the standardized language test still the right tool to use? This paper looks at the differences between standardized language tests and more sophisticated communication assessments.

Three significant differences were found. Standardized language tests fall short in assessing communication skills and comprehension skills, and in the accuracy of their spoken-fluency scoring. This results in a different ranking of candidates. The paper provides examples of how a standardized language test can produce ‘false positives’, where the wrong candidate is passed, as well as ‘false negatives’, where a good candidate is overlooked. It also shows how a more sophisticated communication assessment can more accurately select for frontline customer service job performance.


Almost every contact center agent working today has passed a language test of one kind or another. But how well do those tests work? How often have language test results correlated with the candidates’ actual performance? Over one billion USD is spent by corporations on language testing every year. This represents a significant cost for Talent Acquisition departments. Are they getting value for the dollar?

Research shows that in offshore centers (outside North America and Europe), up to 20% of agents do not have the required language and communication skills to do their job effectively. 1

And across the industry, customer satisfaction levels declined in 2020. According to data compiled by the CFI Group, the 2020 reading is the lowest score since the report was first issued, and it reflects the uphill climb many companies face in satisfying increasingly demanding consumers.

"In the world of self-service, customers now reserve only the toughest problems for the contact center agents," says Sheri Petras, CEO of CFI Group. "Many simple questions are now handled by the customer on the company website, saving the complicated, frustrating questions for contact center agents. Organizations must implement the systems and processes needed to provide effective online self-serve tools to customers, while also ensuring that customers can easily reach an effective and empowered agent when needed."

One of the biggest challenges facing the recruiting industry is to both fill seats AND provide an accurate screen. Even one poor hire who lacks language skills (a ‘false positive’) and has to be either trained, shifted into another position, or let go represents a significant cost for an organization.

Similarly, in a competitive job market, no employer wants to pass over quality candidates who may fail a standard language test for the wrong reasons.

So why are the standard industry language tests selecting the wrong candidates?

Analysis & Solution

We looked closely at an industry-leading standardized language test and compared it with a language assessment designed specifically for frontline contact center employees. We found three significant differences between the two types of assessments.

The following tests were compared:

1. HiringBranch Pre-Hire Communication Assessment English (HB): the test consists of Reading, Listening, Writing, and Speaking sections for contact centers
Duration: 45 minutes
2. Industry-leading standardized test for Speaking and Writing (ST): two separate tests, Speaking (15 minutes) and Writing (35 minutes)
Total duration: 50 minutes

A sample of non-native English speakers took both the HB and ST tests and the results were compared. The results for Speaking, Writing, and Final Score for each test were correlated. While there was some correlation in Writing, there were lower correlations for Speaking and Final Score, which means that the tests selected different candidates for passing. The main reasons are outlined below.

Communication Skills

Standardized language tests (ST) focus on language proficiency only, whereas HiringBranch (HB) adds another layer on top of that in which communication skills are assessed. HB checks not only how the candidate says things but also what they say. This is done with realistic scenarios that imitate the candidate’s anticipated work, which enables HB to identify candidates who are fit for the intended environment.

HB’s test not only checks language proficiency but also assesses the candidate’s real-world communication skills, such as paying attention to detail, acknowledging the customer’s concern, performing under pressure, and using positive language. To see how the incorporation of communication skills into a test affects its validity, let’s look at the following data:

Correlation Coefficient between Communication Skills and Test Scores:

Table 1 - The Pearson correlation coefficient between communication skills and test scores

Table 1 shows a high correlation between candidates’ communication skills and their test results, not only for HB but also for ST. Communication skills are therefore strong predictors of both a candidate’s linguistic ability and their chances of succeeding in their future work.
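As a rough illustration of how such a correlation is computed (with made-up scores, purely for illustration — not the study’s actual data), the Pearson coefficient between communication-skill scores and test scores can be calculated as follows:

```python
import statistics

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length score lists."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical per-candidate scores (0-100 scale assumed):
communication = [82, 64, 91, 55, 73, 68]  # communication-skill scores
final_scores  = [85, 60, 94, 52, 70, 71]  # final test scores

r = pearson(communication, final_scores)
print(round(r, 3))  # close to 1.0 means the two sets of scores move together
```

A coefficient near 1.0 indicates that candidates who score well on communication skills also tend to score well on the test overall, which is what Table 1 reports.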

To demonstrate this, let’s look at a few specific outliers: For the following question, the candidate needs to show that they understood the customer’s issue as well as to assure the customer that they are going to help them.

The following response, from a candidate named Sam, is quite good. Sam rephrases the customer’s issue concisely, showing that Sam understood it. Then Sam describes how the feature should work. Moreover, the sentence “It would help us resolve your issue better” conveys Sam’s commitment to helping the customer.

Now, let’s consider the following response to the same question written by candidate Mary:

Mary repeats the word “issue” twice but does not state what the issue is, so we cannot know whether Mary understood it. The response includes a general statement showing Mary’s willingness to help, but when she started writing the actual solution, she ran out of time and did not complete the task.

In this case, then, Mary not only failed to fulfill the requirements of the task but was also unable to complete it within the four-minute deadline. From the perspective of skills like working under pressure and attention to detail, two crucial skills in her future work, Mary was unable to deliver.

Sam’s communication skills are better than Mary’s. Nonetheless, Mary ranked quite high on the standardized test while failing HB’s test. The opposite was the case with Sam.

Table 2 - Mary’s vs. Sam’s rank in percentile (10th percentile means only 10% of candidates have lower scores)

Similar differences in the ranking are found with other scenarios, both in writing questions and in speaking questions. In many cases, the standardized language test produces a different ranking than the HB assessment.

Spoken fluency

The second issue that accounts for the imperfect correlation between HB and ST speaking tests (and by extension between the final scores of both tests) is the reliability of the spoken fluency measurements in each of the tests.

To enable a fair comparison, the results were manually scored for spoken fluency and pronunciation separately by two people. Their manual scores were then averaged to better represent the candidate’s spoken fluency.
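That scoring step can be sketched as follows (the candidate score values and the 1-5 scale are assumptions for illustration, not the study’s data): each candidate’s reference fluency score is the mean of the two raters’ manual scores.

```python
# Two raters independently score each candidate's spoken fluency (1-5 scale assumed).
rater_a = {"Sam": 4.5, "Mary": 2.5, "Kristine": 4.0, "Oliver": 3.0}
rater_b = {"Sam": 4.0, "Mary": 3.0, "Kristine": 4.5, "Oliver": 2.5}

# Average the two manual scores per candidate to get the reference fluency score,
# which is then correlated against each test's speaking score.
fluency = {name: (rater_a[name] + rater_b[name]) / 2 for name in rater_a}
print(fluency)
```

Averaging two independent raters reduces the effect of any single rater’s bias, giving a more stable reference against which each test’s automated speaking score can be compared.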

Correlation Coefficient between Fluency and Test Scores

Table 4 - The correlation coefficient between each test’s speaking scores and manually scored spoken fluency

As can be seen from Table 4, HB speaking test results correlate better with the candidates’ spoken fluency.

Fluency is an important component of ‘intelligibility’, or the ability for a listener to easily understand a speaker without having to ask for clarifications. It is separate from accent or pronunciation and is a key factor in how smoothly a spoken conversation can be conducted.

The HB assessment uses a variety of metrics that together score on intelligibility. The ST assessments take a more traditional approach. This leads to differences in how the candidates are ranked in the speaking results.

Listening and reading comprehension

As noted above, HB dedicates part of its test to reading and listening comprehension while standardized language tests do not. Let’s examine the correlations between reading/listening scores and the final test scores:

Table 5 - The Pearson correlation coefficient between HB’s reading and listening score and final tests results

The reading and listening comprehension scores correlate with HB’s final test scores, but because of their limited weight in the total scoring formula (reflecting the greater importance of writing and speaking skills), their effect is limited as well. HB’s reading and listening comprehension scores, by contrast, correlate negatively with ST’s final score.
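A weighted scoring formula of the kind described could look like the following sketch (the specific weights are assumptions for illustration, not HiringBranch’s actual formula):

```python
# Hypothetical component weights: writing and speaking dominate the final score,
# while reading and listening contribute a smaller share.
WEIGHTS = {"writing": 0.35, "speaking": 0.35, "reading": 0.15, "listening": 0.15}

def final_score(scores):
    """Weighted final score from per-component scores (0-100 scale assumed)."""
    return sum(WEIGHTS[c] * scores[c] for c in WEIGHTS)

example = {"writing": 80, "speaking": 70, "reading": 60, "listening": 50}
print(round(final_score(example), 2))
```

Because reading and listening carry smaller weights, a weak comprehension score lowers the final score only moderately in this sketch, which is consistent with the limited (but non-zero) effect described above.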

This means that not only does ST’s test NOT account for a candidate’s reading and listening abilities but that ignoring them leads to wrong predictions.

Thus, listening and reading comprehension is an important component in a test. Naturally, this is true regardless of the empirical results, as the ability to understand written and spoken content is essential to a candidate’s ability to succeed as a customer service agent.

The absence of a reading and listening comprehension component from ST’s test results in both false positives and false negatives. One false positive is Mary, who ranked last in this component on HB’s test, which was reflected in Mary’s final HB score; ST’s test, in comparison, ranked Mary quite high. In another example, Kristine, the highest-scoring candidate in reading and listening, ranked high on HB’s test but only in the 30th percentile on ST’s test. Finally, Oliver did quite poorly in reading and listening, which lowered Oliver’s final ranking on HB’s test; ST’s test was unable to detect this weakness and ranked Oliver first among all candidates.

Let’s look at all the example candidates and how they were ranked.

Differences in Candidate Rankings 
90th percentile is high, 10th is low

Table 6 - Mary, Kristine, Oliver and Sam rankings in percentile


Further discussion

Why would the addition of communication skills and comprehension matter?

We know from research and surveys that specific communication skills correlate directly with agent performance metrics such as customer satisfaction, average contact handle time, and reopen rate.

Industry surveys show that the top three skills contact center trainers look for (in addition to spoken and written fluency) are the ability to listen carefully and acknowledge, to pay attention to detail, and to reassure a customer that their needs will be looked after.2

Recent consumer behaviour research conducted on ‘concrete language’ or attention to detail, as well as listening or ‘attending to’ a customer, shows that even small changes in words used by agents have positive outcomes.

The field data suggest that increasing linguistic concreteness by one standard deviation improves customer satisfaction by 9% and actual spending by at least 13%. 3

This data reinforces the appropriateness of the HB assessment in targeting the real-world skill requirements of frontline customer service employees.


Despite some correlation between candidates’ scores on the HB and ST tests, we have shown three major differences between the two tests:

1. While both HB and ST evaluate linguistic proficiency, HB’s test adds a layer of assessment that accounts for the candidate’s communication skills. This layer contributes dramatically to the prediction of the candidate’s chances to succeed in their prospective work.
2. HB achieves more reliable results in scoring candidates’ spoken fluency.
3. HB is the only test that analyses the candidates’ reading and listening abilities, two crucial components in the candidates’ future work.

The multiple layers of assessment and test components enable HB to easily spot false positives, thus addressing one of the biggest challenges of the recruiting industry by saving time and money.

The HB assessment also addresses false negatives and ensures that the right candidates are not passed over.

Contact centers looking to hire for customer service excellence can consider a wider range of testing than what is available from standard language tests. Specifically, assessments that evaluate communication, intelligibility, and comprehension skills will produce different results than a standard language test, and those results will be more accurate predictors of performance in the contact center work environment.

About HiringBranch

HiringBranch is an AI-powered hiring assessment that drives down costs and improves performance for hiring teams around the world. HiringBranch evaluates not only what candidates say, but how they say it. Quick, accurate, and authentic, it is a soft skills and language assessment in one. Serving hiring teams in retail, sales, health, and IT with 100 to 10,000+ employees, you too can hire confidently and effortlessly.

Learn how you can start your proof of concept.

Book a demo
1. Case Study, Global Fortune 50 Client, HiringBranch, July 2020
2. Survey of Workplace Communication Skills, HiringBranch, January 5, 2019
3. How Concrete Language Shapes Customer Satisfaction, Journal of Consumer Research, ucaa038, published July 18, 2020
4. Candidates are passing your standardized language test but performing poorly, HiringBranch, July 2020
