Improving Talent Management: Demanding the Best of Your Selection Procedures
By James T. Stodd, SPHR
Holiday Greetings!
This is the season of “shopping”. Hopefully “peace on earth and good will toward men” will follow shortly, but right now it’s all about the “shopping”!
This season of shopping reminds me that many of you are shoppers and consumers of products (and tools) designed to help you screen and select the best candidates for hire. Or perhaps you are just now considering using some of these tools to improve your company’s performance. Either way, I’d like to use this article to provide some useful “tips” when considering tests, inventories, assessments, profiles and other “tools” designed to help you make better personnel selection decisions.
Before getting into the tips and recommendations, let me start with a brief story that will help illustrate the importance of this topic and why I’m bothering to write about it at this time.
The “Good” Experience
Like you, on occasion I meet with (or am sought out by) people marketing pre-employment selection tools and instruments. Several months ago I had the pleasure of meeting with such a gentleman to listen to his story about the instruments he was promoting to business organizations. He was a sharp, articulate former executive who was quite passionate about what he had to offer. So I listened intently to his “discussion points” about how wonderful his instruments were, as well as the qualifications and brilliance of the test developers. Then, about halfway through his “pitch,” I interrupted and started asking technical questions about the quality of the instruments and the research behind them. To my surprise he answered those “technical questions” very well…like he was anticipating them…and I was quite impressed, particularly since the gentleman did not have any formal education in industrial psychology or human resource management. Then I was doubly impressed (actually more like “floored”) when he pulled out the technical manual (about an inch thick) and offered it to me to read and digest. So, over the next couple of weeks, I did just that!
Upon reviewing the technical manual I found the instruments he was promoting were supported by years of ongoing research, development and continuous improvement, all of which collectively demonstrated the quality, integrity and usefulness of the instruments in a wide variety of applications. Moreover, the technical manual served as a testament to the efforts of the test developer (a corporation), not only regarding the soundness of its original instruments, but also its commitment to continuously improve them. Needless to say, during our second meeting I complimented him on how well he had handled my technical questions, complimented his instruments as well as the folks who developed them, and offered to help in any way that I could to make his business development efforts successful.
Now the reason I’m telling you this story is that I have found the above to be very “atypical” when dealing with many vendors and test developers. Let me explain…
The “Not-so-good” Experience
Like I said, over the years a number of folks have approached me regarding their tests, measures and instruments. In most of those cases the person promoting the instrument, and singing its praises to the “high heavens,” would shrink in their seat when I started asking the hard, technical questions. That always signaled to me that they had very little understanding of the real properties and/or usefulness of the tests they were so passionately trying to promote! One such perplexed salesperson could only respond by offering to “put me in touch” with their research department, but in the end the promised research reports were weeks in coming, and when they arrived the evidence supporting the quality of the instruments was very weak. On another occasion, the business developer (aka, the “sales guy”) even tried to discredit the validity of my technical questions. Instead, he argued that his company had over 20,000 happy subscribers, and that 20,000 people can’t be wrong! Well, I don’t know whether his company has 20,000 happy subscribers or not, but he never did give me an answer to my technical questions about the tests or even suggest an alternative source for that information. Needless to say, I wasn’t favorably impressed!
Some have estimated that there are as many as 50,000 tests, measures and tools on the market today that are promoted and sold with the purpose of helping employers make better personnel decisions.[i] That’s a lot! With that number of options, and more being developed each day, selecting the best tests, measures and tools can be a daunting task. And, as my stories indicate, it’s a “buyer beware” world even when shopping for personnel selection tools. Now, don’t get me wrong, I’m a supporter of tests, assessments, inventories, profiles, etc., that are standardized, objective and will help you make difficult personnel selection decisions. I’m just offering a word of “caution” and some advice so you too can be an astute shopper. Here are the things to look for when considering a test, inventory or other personnel selection tool:
Reliability
All of these tools, whether they are called psychological tests, competency assessments, inventories, profiles, or any of the other terms used to describe them, are tools used to measure a human attribute or set of attributes. The key word is “measure”, and like any good yardstick or thermometer, you want to make sure the instrument is reliable, consistent, will result in the same outcome (i.e., measurement) when applied multiple times to the same person, and will fairly allow you to compare one candidate to another on that attribute. That’s what “reliability” is about…standardization and consistency…and all “professional” test developers are very conscientious about making sure their instruments meet certain reliability standards!
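For those who like to see the arithmetic behind the jargon, reliability is usually reported as a correlation coefficient in an instrument’s technical manual. The short sketch below (in Python, with made-up scores for eight hypothetical candidates) shows the basic idea behind one common form, test-retest reliability: give the same people the same instrument twice and correlate the two sets of scores.

```python
from statistics import mean, stdev

# Hypothetical scores for the same eight candidates on two administrations
# of the same instrument, a few weeks apart (illustration only).
first_admin  = [72, 85, 90, 66, 78, 88, 61, 95]
second_admin = [70, 87, 92, 64, 80, 85, 65, 93]

def pearson_r(x, y):
    """Pearson correlation between two equal-length lists of scores."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (len(x) - 1)
    return cov / (stdev(x) * stdev(y))

# The test-retest reliability estimate; well-built instruments often report
# coefficients in the .80s or above in their technical manuals.
print(f"Test-retest reliability estimate: {pearson_r(first_admin, second_admin):.2f}")
```

The point is not that you should run this yourself, but that the developer should be able to show you numbers like this, not just describe them.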
Validity
Personnel selection tools are not only about “measuring”; they are also about “predicting” who will do well on the job and who will not. If an instrument does not help you “predict,” it’s not worth using (that includes your pre-employment interviews)! Moreover, using a tool that is not “predictive” is not only useless; it may put you smack-dab in the path of costly litigation.
There are a couple of ways to assess the “validity” of a personnel selection tool. The first applies to tools (like work samples, behaviorally-based interviews, and job knowledge tests) that actually “sample” the behavior, knowledge, or skills critical to doing well on the job. Tests that directly sample behavior and critical competencies are said to have “content validity”.
An example of “content validity” would be using a job-knowledge test with candidates for a “machinist” position that requires the candidate to demonstrate mastery of the same geometry, trigonometry and tooling concepts required by the job for which you are hiring. Another example would be using a test that measures proficiency in Microsoft Office when 90% of the person’s work would have to be done using that software. In either case, it is reasonable to assume that if the candidate does well on the test, they are likely to do well on the job, and vice versa.
The second type of validity involves situations where a certain “attribute” or “characteristic” of a person (often called a “dimension”) is thought to be related to the person’s ability to do well on the job. This is often the case with measures of intelligence, aptitude, personality, job preferences or style. Since the test does not actually sample job behavior, or the knowledge, skills and abilities known to be directly related to job performance, we can only assume that people with high scores on the particular “attribute” will do better than others. In this case you want an instrument whose developers have demonstrated that it actually predicts which folks will perform better than others on the job. This is called “predictive validity”.
An example of predictive validity would be using some measure of “emotional intelligence” to predict leadership effectiveness. Theory, and some research, suggests that executives high in “emotional intelligence” make better leaders. But this is theory, not fact! You want a test of emotional intelligence (or any other attribute) that has been proven effective in predicting who will do well and who will not.
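To make this concrete, here is a similar sketch (again with invented numbers, purely for illustration) of what a predictive validity study boils down to: correlating candidates’ scores on the instrument with how those same people later performed on the job. A credible test developer’s technical manual should report coefficients like this, ideally from multiple studies on jobs similar to yours.

```python
from statistics import correlation  # available in Python 3.10+

# Hypothetical data: pre-hire test scores and later supervisor performance
# ratings (1-5 scale) for ten people hired over the past year. Illustration only.
test_scores        = [55, 62, 70, 48, 81, 66, 74, 59, 88, 63]
performance_rating = [2.8, 3.1, 3.9, 2.5, 4.4, 3.2, 4.0, 3.0, 4.6, 3.3]

# The validity coefficient: how strongly test scores predict later performance.
# In the published selection research, coefficients roughly in the .30-.50 range
# are generally considered quite useful; a value near zero means the instrument
# is not predicting anything.
r = correlation(test_scores, performance_rating)
print(f"Predictive validity coefficient: {r:.2f}")
```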
Note: When talking with test developers/vendors, always ask them about the “reliability” and “validity” of their instruments. Reliability and validity are to tests what horsepower, torque, crash ratings and fuel efficiency are to cars. If they can’t respond quickly to your questions (like the first person in my story did), my advice would be to send them packing…they are likely wasting your time!
Job-Relatedness
Test developers and vendors are generally very passionate about their instruments and often believe they are useful in a wide variety of applications. However, just because a personnel selection tool has been shown to be a reliable and valid predictor in some situations does not mean that it will work in your setting! In scrutinizing and selecting your selection tools, make sure the test developer/vendor has demonstrated the tool is a “valid predictor” of performance on jobs that are either the same or very similar to yours.
Test Utility
Even when a test or other personnel selection tool is reliable, valid, and job-related to the kinds of jobs you have in your organization, it still may not be very useful to you! Selection tools, even really good ones, are only useful under certain conditions. These include the following:
1. You need to make sure each person you hire can demonstrate a certain level of “mastery” over a certain knowledge-base, skill-set or range of competencies. Society uses this approach when testing and licensing doctors, nurses, accountants, attorneys, electricians, plumbers, pilots and others for whom we need assurance that they “know their stuff”.
2. If you are not using a test to demonstrate “mastery” (as above), then you are likely comparing candidates to each other (or to a set score) to help you determine who would be most likely to “succeed” or “perform the best” on the job. Either way, for the test to offer any advantage you must have the latitude to be “picky” in your hiring decisions (see the sketch after this list). If your candidate pool doesn’t permit you to be “picky” about who you hire, then all the tests in the world won’t help you much! In that case I’d work on improving your recruitment efforts so that you can be “picky,” and then talk about tests.
3. The “satisfactoriness” of your current workforce is another variable to consider. Everybody dreams of improving the capability and performance of their workforce, but the practical question is whether or not it’s really doable. If you don’t see yourself as being able to significantly improve productivity by selecting better candidates from the pool of applicants you’re getting, then your current selection tools and methods are probably sufficient. After all, if you believe you are already employing the best people available, then adding another step to the process (and cost), even if the test is really a good one, may not make a lot of sense from a business perspective.
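To put some rough numbers behind points 2 and 3, here is a small simulation (all values invented, and assuming a moderately valid test) showing why selectivity matters: the very same instrument adds nothing when you must hire everyone who applies, but meaningfully raises the average performance of those hired when you can take only the top quarter of the applicant pool.

```python
import random

# Illustrative simulation: each simulated candidate gets a test score and a
# job-performance value that are moderately correlated (validity of about .40).
random.seed(42)
VALIDITY = 0.40
N_CANDIDATES = 10_000

candidates = []
for _ in range(N_CANDIDATES):
    score = random.gauss(0, 1)
    # Performance is partly driven by the tested attribute, partly by other factors.
    performance = VALIDITY * score + (1 - VALIDITY**2) ** 0.5 * random.gauss(0, 1)
    candidates.append((score, performance))

def mean_performance(pool):
    return sum(p for _, p in pool) / len(pool)

hire_everyone = candidates  # no selectivity: the test cannot add anything
top_quarter = sorted(candidates, reverse=True)[: N_CANDIDATES // 4]  # "picky" hiring

print(f"Average performance, hire everyone:   {mean_performance(hire_everyone):.2f}")
print(f"Average performance, top 25% by test: {mean_performance(top_quarter):.2f}")
```

Run with these assumptions, the “hire everyone” group averages out to the middle of the pack, while the group selected on the test scores noticeably higher, which is the whole business case for being able to be “picky.”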
Demonstrated Vendor Commitment
There are a number of companies out there today that are fairly new to the business of personnel selection. Many of them have gone into the market leveraging very robust technology platforms, online testing and report-generating capability, or even expertise in other areas of human resource management. But beware, and ask the “technical questions.” Many of these companies offer products that look cool, measure “innovative-looking” dimensions, and offer quick and affordable solutions that appear insightful, if not common sense. However, when you look under the hood from a “psychometric” standpoint, there often isn’t much there! Image is not value, size is not quality, and speed is not validity. Be sure to compare the “solutions” of these newcomers to the tools offered by folks who have been in the testing business for years, and whose tools are supported by volumes of research. They are the “gold standard” of personnel selection!
[i] Warren Bobrow, PhD, “Caveat Emptor,” AllAboutPerformance Blog, September 17, 2012.
About the Author
Jim Stodd is a Principal and Managing Director of JT Stodd & Associates. Jim has helped numerous clients develop the organizational architecture and infrastructure required to achieve their strategic visions and goals. In addition, he has assisted other organizations in building strategically focused and highly successful human resource management programs by introducing forward-thinking approaches to talent management issues. Before starting an independent consulting practice in 2001, Jim spent more than 15 years in senior management positions where he was responsible for human resources, organization development and change management. In addition, he was associated with several leading professional service firms including Ernst & Young LLP, Hay Management Consultants, and First Transitions, Inc. Jim is a specialist in Strategic and Organizational Planning, Change Management and Human Resource Management. He currently teaches classes in those subjects at Louisiana State University and the University of Louisiana-Lafayette. Prior to that he taught at the University of California-Irvine, where he received UCI’s “2010 Distinguished Instructor Award”.