Monday, December 31, 2007

The Limitations of Worker-Oriented Job Analysis

Across all worker-oriented job analysis tools, a common set of serious limitations exists, especially in how the items are rated:

  • Because they strip out any company-specific focus, these tools are blind to the reality of a company’s culture and of the industry and market within which the company operates and competes. They are also US-centric: in fact, 80% of the non-US-based employees used to originally validate one of the competency models were American expatriates.
  • None of the analysis tools looks across all employees, from exempt down through non-exempt positions, so no comparative data can be created to help manage employees in both their current and future positions. In the case of the management-specific tools, many of the items refer to direct reports; an employee who has no direct reports is rated low on every one of those items regardless of actual skill. A company that wants a full-service solution will need to use at least two of the listed tools, none of which clearly overlaps between levels.
  • Many of the work behaviors being measured are actually compound lists. For example, the definitions of several of the competencies in one of the competency models cited refer to the same interpersonal skill. If an employee is limited in that one area, the evaluation suffers dramatically across every item that includes that single skill.
  • Each tool is built on the assumption that certain interpersonal skills and technical skills always go together. Without clearly separating interpersonal from technical skills, the skills that truly differentiate performance can only be inferred; they are not valid predictors. The vendors’ assumptions rest on their own self-serving research, not on independent research into predictive validity, and their theories are founded on an ideal rather than grounded in business reality.
  • These tools are not updated as business conditions and markets shift. Because of the initial investment of time and energy in creating each tool, and the pride of authorship, the tools look and feel the same today as they did when they were created ten to twenty years ago. When they are ‘re-validated’, the authors simply review the data collected on the existing items without looking outside their own models for additional items that should be added to reflect current global business conditions.
  • The items are not scored consistently from rater to rater, for two reasons. First, several of the numerical anchors are not clearly defined (e.g. the job is rated on a scale from 1 to 5, but only 1, 3, and 5 have text descriptions), so two raters can assign the same number and mean different things by it. Second, the description of the item may suffer from one of the problems mentioned above: the rater did not understand the language used in the tool, the person being rated had no direct reports, and/or the same skill was spread across several items. This variability has been raised in court when employees who felt adversely impacted by a decision challenged the validity of the tool used to make it; a short sketch of how that variability can be measured follows this list.
  • The focus of the tools has been on what the employee is doing instead of what the employee should be doing. In the very first analysis tools, this was evident when I/O psychologists defined the importance of an activity by the amount of time spent on it during a normal workday. With the more recent competency models, each rater decides what is important based on his or her own perceptions. Consultants who use the competency models may draw on benchmark research, but much of that research suffers from the same problems noted earlier in this list (it is neither industry- nor culture-specific). Finally, this is a process problem: no analysis should be done without considering the strategic objectives of the organization and the position’s objectives that tie back to that strategy. Unfortunately, even where that is being addressed today, the results remain skewed toward the status quo; although incumbents superficially define importance in terms of objectives, the actual results consistently show that they create profiles matching the skills they already have instead of the skills they should have.
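To make the rater-consistency point concrete, here is a minimal sketch using hypothetical ratings and plain Python (the numbers and names are made up for illustration, not drawn from any vendor’s tool). Two raters score the same ten jobs on the 1-to-5 scale described above; they agree at the anchored points (1, 3, and 5) but read the undefined 2s and 4s differently, and Cohen’s kappa quantifies how much of their agreement is better than chance.

    from collections import Counter

    # Hypothetical ratings of ten jobs by two raters on a 1-5 scale where only
    # 1, 3, and 5 carry text anchors. Rater B pulls the undefined 2s and 4s
    # toward the nearest anchored value.
    rater_a = [1, 2, 3, 4, 5, 2, 3, 4, 2, 4]
    rater_b = [1, 3, 3, 3, 5, 1, 3, 5, 3, 5]

    def cohens_kappa(a, b):
        """Agreement corrected for chance: 1.0 is perfect, 0.0 is chance-level."""
        n = len(a)
        observed = sum(x == y for x, y in zip(a, b)) / n
        counts_a, counts_b = Counter(a), Counter(b)
        expected = sum(counts_a[k] * counts_b[k] for k in set(a) | set(b)) / (n * n)
        return (observed - expected) / (1 - expected)

    raw = sum(x == y for x, y in zip(rater_a, rater_b)) / len(rater_a)
    print(f"Raw agreement: {raw:.0%}")                             # 40%
    print(f"Cohen's kappa: {cohens_kappa(rater_a, rater_b):.2f}")  # 0.29

With these made-up ratings the two raters agree only 40% of the time, and kappa comes out near 0.29, far below the levels usually treated as acceptable inter-rater agreement. That is exactly the kind of gap that undermines a tool’s defensibility when its results are challenged.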
