On a recent flight, I spoke with a general manager for a manufacturing company. When I explained that I work for an assessment company, his eyes widened, and he excitedly said he had used an assessment for 10 years and that the assessment “had never gotten it wrong.”

When I asked what he meant, he said the assessment confirmed his perceptions of candidates from interviews. Intrigued, I asked how he had chosen the assessment from the wide variety available. He said he liked “the assessment guy.”

When I asked about the science behind the assessment, he indicated that he didn’t even realize there were other selection assessments out there and that he didn’t care much about the science as long as he liked “the assessment guy” with whom he worked.

His approach to using assessments is not uncommon. My seatmate was an engineer, someone taught to understand, use and value science. I have no doubt that his education covered statistical concepts such as correlation, reliability and validity. I’m equally sure he used those and other statistical methods in his work daily, because he builds highly technical, engineered products. Nonetheless, he didn’t seem to see the connection between good measurement and high-quality human performance the way he saw the connection between good measurement and the quality of his products.

Perhaps he, like many of us, thinks he is a good judge of character. He’s not. We’re not. If we were, the 40% rate of divorce for first marriages in the U.S. would be lower. We often fool ourselves into believing we can make highly accurate predictions about whether a candidate will be effective at a job. We can’t. We think people are relatively easy to understand. They aren’t.

Fortunately, there are brilliant people creating assessments that can predict outcomes like job performance, turnover and leadership potential. You don’t have to go too deep into the science to use them appropriately and make better decisions about talent. You just need to know what to look for.

There are only three concepts you need to understand to make good decisions about which assessments are right for your company and purpose.

1. Job-relatedness

Do you understand what the job really requires? What are the abilities, skills and characteristics that someone needs to do the job well?

Job-relatedness is an important scientific and legal standard, because it helps determine whether an assessment measures what is important for success on the job. That standard prevents the use of race, gender or other irrelevant characteristics to make decisions about someone’s occupational future.

The process for establishing job-relatedness is job analysis. Any reputable assessment provider will have a job analysis process and will be able to explain how it works and how it supports the use of its assessments for specific purposes. If your assessment provider doesn’t have a well-documented job analysis process, or can’t provide evidence that the assessment is relevant for the purpose you intend to use it for, find another provider. You will likely end up in court with the one you have.

2. Reliability

Reliability is about whether the assessment measures in a consistent way. For example, imagine that you want to measure an adult’s height, but all you have is a rubber ruler that may stretch when you use it. You measure the person’s height multiple times and find a different height each time. Clearly, the person isn’t growing or shrinking. You have an unreliable, or inconsistent, measure of height.

The same simple idea applies to talent assessments. If an assessment measures an enduring characteristic, such as extroversion, it ought to report the same level of that characteristic time after time.
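To make that concrete, here is a minimal sketch in Python of one common way reliability is estimated for a trait measure: give the assessment twice to the same people and correlate the two sets of scores (test-retest reliability). The scores and the .70 rule of thumb below are illustrative only, not drawn from any particular assessment.

```python
# Minimal sketch: test-retest reliability as the correlation between
# two administrations of the same assessment. All scores are invented.
from statistics import correlation  # Python 3.10+

# Extroversion scores for the same eight people, measured a month apart
time_1 = [42, 55, 61, 38, 70, 49, 58, 66]
time_2 = [44, 53, 63, 40, 68, 47, 60, 64]

# A consistent (reliable) measure keeps people in roughly the same order
# both times, so this correlation should be high; reviewers often look
# for values around .70 or above, though conventions vary.
r = correlation(time_1, time_2)
print(f"Test-retest reliability estimate: r = {r:.2f}")
```

The rubber ruler would fail exactly this check: repeated measurements of the same person would barely agree.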

3. Validity

A valid measure is one that helps you make accurate predictions. For example, horoscopes are not valid measures of overall health, lifetime income or shoe size; we can’t make accurate predictions about those things based on knowing whether a person is a Virgo or a Libra.

Suppose an assessment intended to predict job performance asks candidates to stack eight colored cards in order of their color preferences. There is no evidence that such an assessment predicts job performance, measures personality or even measures color preference. The vendor may call it “gamified” or claim it draws on neuropsychology. Those terms say nothing about predictive power. Ask for documented scientific evidence that the assessment will help you make the predictions you need to make.
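The same kind of simple check, again with invented numbers, shows what documented validity evidence boils down to: scores on the assessment should correlate with the outcome it claims to predict, such as later job performance ratings. This is a sketch of a criterion-related validity check, not any vendor’s actual method.

```python
# Minimal sketch: criterion-related validity as the correlation between
# assessment scores at hire and job performance measured later.
# All numbers are invented for illustration.
from statistics import correlation  # Python 3.10+

assessment_scores = [72, 58, 65, 80, 49, 77, 61, 68]             # at hire
performance_ratings = [4.1, 3.2, 3.6, 4.5, 2.9, 4.0, 3.4, 3.9]   # a year later

# A valid selection assessment shows a meaningful positive correlation
# with the outcome it claims to predict; a horoscope- or card-stacking-style
# measure would hover near zero.
r = correlation(assessment_scores, performance_ratings)
print(f"Criterion-related validity estimate: r = {r:.2f}")
```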

Let’s use a practical example to show how these three concepts matter. Assume you have a backache and you call your doctor. She prescribes a pill before asking any questions about your pain; in other words, she hasn’t yet established that the effect of the pill she is prescribing is related to what ails you. Next, she tells you that the pill sometimes works and sometimes doesn’t, so you shouldn’t be surprised if your pain doesn’t subside; the pill is unreliable. Finally, you ask whether there is published scientific evidence that this pill reduces the type of pain you are experiencing. She says no but goes on to say that she knows it will work; you should simply trust her.

Are you going to take those pills that are unrelated to your condition, are unreliable about producing results and have not been shown to help with your condition? If not, you shouldn’t choose a talent assessment that doesn’t meet the standards of job-relatedness, reliability and validity.

You may think you have neither the expertise nor the time to evaluate whether an assessment will be useful. After all, it looks like it measures personality or preferences or abilities. You may have completed it yourself and agreed with the results. However, choosing an assessment that appears to measure a construct without confirming job-relatedness, reliability and validity would be like taking a pill just because it looks like medicine.

No organization would invest in a new product or acquisition just because someone says he or she knows it will be a good investment. Organizations want a rigorous business case that shows an investment is likely to pay off. They should not apply a lower standard to making predictions about people, and credible assessment vendors will gladly welcome questions about job-relatedness, reliability and validity.

Start asking questions about these three important assessment elements to help sort the providers who merely say their assessments are effective (hint: they all do) from those who can prove it. You will eliminate a startling number of assessment providers by doing so. Any assessment that is not supported by written, understandable job-relatedness, reliability and validity evidence for the use you have in mind should be eliminated from consideration immediately. Remember, assessment decisions affect people’s lives for better or worse. Please choose for better.