
Assembling Specifications for Job Analysis:

An Introduction to Best Practices



The term job analysis refers to procedures designed to obtain descriptive information about the tasks performed by professionals and/or the knowledge, skills, or abilities thought necessary to perform those tasks adequately. In simpler terms, Brannick and Levine (2002)1 refer to job analysis as "discovering, understanding, and describing what people do at work" (p. 1). The specific type of information collected for a job analysis is determined by the purpose for which the information will be used.

Alternate names for job analysis include job task analysis, role delineation, competency study, practice analysis, role and function study, and body of knowledge study.


A job analysis provides validity evidence for employment-related tests, such as those used to hire or promote employees, or to grant a license or certification. A job analysis may also be performed to:

  • define a job domain
  • write a job description
  • create a guide for performance reviews
  • support selection and/or promotion criteria
  • assess training needs
  • determine compensation
  • develop credentialing criteria
  • plan organizational requirements

A job analysis is conducted in accordance with The Standards for Educational and Psychological Testing (1999) (The Standards), a comprehensive technical guide that provides criteria for the evaluation of tests, testing practices, and the effects of test use. It was developed jointly by the American Psychological Association (APA), the American Educational Research Association (AERA), and the National Council on Measurement in Education (NCME). The guidelines presented in The Standards have, by professional consensus, come to define the necessary components of quality testing. Consequently, a testing program that adheres to The Standards is more likely to be judged valid and defensible than one that does not.

As stated in Standard 14.14,

"The content domain to be covered by a credentialing test should be defined clearly and justified in terms of the importance of the content for credential-worthy performance in an occupation or profession. A rationale should be provided to support a claim that the knowledge or skills being assessed are required for credential-worthy performance in an occupation and are consistent with the purpose for which the licensing or certification program was instituted…Some form of job or practice analysis provides the primary basis for defining the content domain…" (p. 161)2


A variety of methods are used to complete a job analysis: observation, interviews, literature review, focus groups, critical incident interviews, and surveys. Descriptions of these processes follow.

OBSERVATION: A trained job analysis professional observes the practitioner in various work settings. Behaviors are recorded and frequency of tasks is analyzed.

INTERVIEWS: Incumbents in the job are interviewed about what they do. They may also be asked what they need to know in order to perform the tasks. One-on-one interviews can provide valuable information about professional practice that cannot be obtained adequately through group formats (e.g., focus groups).

LITERATURE REVIEW: Academic journals, professional magazines, and other related materials are reviewed. Previously conducted job analyses are studied. An example of a source of information about occupations can be found in an international database available through the Internet. "The International Standard Classification of Occupations (ISCO) is a tool for organising jobs into a clearly defined set of groups according to the tasks and duties undertaken in the job."3 ISCO is maintained by the International Labour Organization, a specialized agency of the United Nations.

FOCUS GROUPS: Prometric uses focus groups structured around a series of guided questions. Participants provide responses describing what they do and what they need to know in order to perform their jobs. This research technique is particularly well suited to gathering information about emerging practices, especially those that may affect the profession in coming years. This approach is often used when the population to be credentialed is small, such as a company expecting to hire one or two individuals for a position or a certification body with a limited membership.

CRITICAL INCIDENT INTERVIEWS: The primary focus of a critical-incident interview is the description of work-related problems and how they are resolved. The structure of these interviews requires professionals to reflect on the behaviors critical to the profession and to the performance of the individuals they have supervised. The interview guide is often provided to the interviewee in advance so that responses can be prepared. Common and unique solutions provided by the interviewees are then analyzed.

SURVEY: The development of a job analysis survey is an iterative process. A survey provides quantitative and qualitative information from a large number of individuals. Surveys allow credentialing organizations to link their assessments to the job, and this is the recommended approach for standardized assessments of an occupation. A survey completed by experts in the field also strengthens the content-validity evidence it provides.


Below are the recommended steps in the development of surveys and the analysis of the results.

Depending on the focus of the survey, background (demographic) questions, tasks, knowledge statements, and/or other components are drafted using previous job analyses, published literature, and/or subject-matter expertise.

A Task Force Committee, typically composed of 12 to 15 subject-matter experts representative of the profession (e.g., geographic region, work setting, years of experience), is convened for a multi-day, in-person meeting to review and revise the preliminary list of survey components.

After the Task Force Meeting, a draft version of the survey is created. A survey typically consists of the following sections:

  1. Background Information (demographic questions)
  2. Tasks
  3. Knowledge/Skills
  4. Recommendations for Test Content Weights
  5. Write-In Comments

It is critical for the survey(s) to be clearly worded and comprehensive in content; this ensures that the survey questions and results can be meaningfully interpreted. It is also important that surveys can be completed in a realistic time frame, thereby preventing "survey burnout." A well-designed survey, refined through quality-assurance reviews, can promote high response rates. These QA reviews are listed below in chronological order.

  • Task Force Review: The draft of the survey is e-mailed to the Task Force Committee for review and comment. Comments typically consist of: 1) suggested additions to, deletions from, or clarification of the tasks and knowledge; 2) proposed revisions to the survey instructions and the rating scales; and 3) changes to the background information questionnaire and the write-in comments questions. A Web conference with the Task Force members is conducted to discuss the comments. Suggested changes are integrated into the survey instrument as appropriate.

  • Pilot Test: A pilot test of the survey is conducted with a small group of subject-matter experts not previously involved in survey development. The purpose of a pilot test is to determine whether the survey content is clearly written and comprehensive. It also allows the survey researchers to measure how long the survey takes to complete. A Web conference with the Task Force members is conducted to discuss the pilot participants' comments.

The survey is administered to a representative group of participants of sufficient size to ensure that the job analysis results are valid. Critical subgroup (e.g., geographic region; work setting; years of experience) responses are monitored during survey administration to determine the possibility of conducting separate data analyses.
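As a simple illustration of subgroup monitoring, the sketch below computes response rates by work setting and flags subgroups large enough for separate analysis; all counts and the minimum-subgroup threshold are hypothetical assumptions, not prescribed values.

```python
# Hypothetical invitation and response counts by work setting (assumed data).
invited = {"Hospital": 400, "Private practice": 250, "Academia": 50}
responses = {"Hospital": 180, "Private practice": 95, "Academia": 12}

# Assumed minimum number of respondents for a separate subgroup analysis.
MIN_RESPONSES = 30

for group, n_invited in invited.items():
    n_resp = responses[group]
    rate = n_resp / n_invited
    feasible = n_resp >= MIN_RESPONSES
    print(f"{group}: {rate:.0%} response rate; separate analysis feasible: {feasible}")
```

Monitoring these counts during administration, rather than only at close-out, allows targeted follow-up (reminders, incentives) aimed at the under-represented subgroups.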

Strategies for increasing the survey or critical subgroup response rate include posting articles/notices on Web sites and/or appropriate print/Internet-delivered media (magazines, newsletters, journals), or offering incentives (e.g., continuing education units, a gift certificate).

The invitation to participate, which includes the survey URL, is followed by at least two reminder notices.

At the close of the survey, the appropriate and reasonable analyses to be performed based on number of responses are determined. In general, these analyses include:

  • Descriptive statistics (frequency distributions) on the background (demographic) information provided by the respondents.
  • Descriptive statistics (means, standard deviations, and/or frequency distributions) for each of the survey components such as tasks or knowledge statements for the total group and important subgroups as appropriate.
  • Indices of agreement and/or analysis of variance (ANOVA), as appropriate, depending on the survey components and the number of responses.
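As a minimal sketch of the descriptive analyses above, the Python below computes a mean, standard deviation, and frequency distribution for each rated component; the task names and ratings are hypothetical.

```python
from collections import Counter
from statistics import mean, stdev

def summarize(values):
    """Descriptive statistics for one survey component's importance ratings."""
    return {
        "mean": round(mean(values), 2),
        "sd": round(stdev(values), 2),
        "freq": dict(sorted(Counter(values).items())),  # frequency distribution
    }

# Hypothetical ratings on a 0-4 importance scale, one per respondent.
task_ratings = {
    "Task 1": [4, 3, 4, 3, 4, 2, 3],
    "Task 2": [1, 2, 1, 0, 2, 1, 1],
}

for task, values in task_ratings.items():
    print(task, summarize(values))
```

In practice these statistics would be computed for the total group and for each critical subgroup, with agreement indices or ANOVA added when response counts permit.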


As noted, the primary purpose of the survey is to confirm the tasks and knowledge/skills important to performing the job at an acceptable level. However, other scales may be used in survey research, depending on the purpose of the job analysis.

The primary scale for establishing content validity is importance. Below are typical rating scales used for importance of tasks and knowledge/skills.

How important is performance of the task in your current position?
Response choices: 0=Of no importance; 1=Of little importance; 2=Of moderate importance; 3=Important; 4=Very important

How important is the knowledge/skill in your current position?
Response choices: 0=Of no importance; 1=Of little importance; 2=Of moderate importance; 3=Important; 4=Very important

On occasion it is useful to obtain information from both incumbents in the field and their supervisors. In this situation, it is useful to identify those tasks that are performed by the incumbents and those that are part of a supervisor's role.

Indicate whether you perform or supervise/manage the task.
Response choices: 0=Neither perform nor supervise/manage the task; 1=Perform the task; 2=Supervise/manage the task; 3=Both perform and supervise/manage the task

In addition to importance ratings, it is useful to determine when the knowledge/skills are acquired.

When did you first acquire the knowledge/skill?
Response choices: 0=I have had no exposure to this knowledge/skill; 1=During my undergraduate education program; 2=During my graduate education program; 3=During the first year working; 4=During the second to third year working; 5=After the third year working

The responses for point of acquisition are tailored to meet the learning environment of the survey population. Below is an example of a different set of responses.

When should this knowledge/skill be predominately learned or attained?
Response choices: 0=Not required at all; 1=In an undergraduate university program; 2=During a recognized training program; 3=In on-the-job training prior to certification; 4=In on-the-job training post certification; 5=In a continuing education program post certification

In order to evaluate survey responses, criteria for inclusion or exclusion of tasks and/or knowledge/skills must be established.

Since a major purpose of the survey is to ensure that only validated tasks and knowledge/skills are included in the development of test specifications, a criterion (cut point) for inclusion needs to be established.

A criterion that is typically used is a mean importance rating that represents the midpoint between moderately important and important. For the importance rating scale used across many studies, the value of this criterion is 2.50. As noted above, the importance scale ranges from 0=Of no importance to 4=Very important. This criterion is consistent with the intent of content validity, which is to measure only important tasks or knowledge/skills in the credentialing examination.


Definition of Pass, Borderline, and Fail Categories for Task and Knowledge Mean Ratings

  • Pass: At or above 2.50
  • Borderline: 2.40 to 2.49
  • Fail: Less than 2.40
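Applying these category thresholds to a mean rating is a simple comparison; the sketch below assumes mean ratings are reported to two decimal places, and the example ratings are hypothetical.

```python
def classify(mean_rating):
    """Assign a Pass/Borderline/Fail category using the 2.50 inclusion
    criterion and the 2.40-2.49 borderline band."""
    if mean_rating >= 2.50:
        return "Pass"
    if mean_rating >= 2.40:
        return "Borderline"
    return "Fail"

# Hypothetical task mean importance ratings.
for m in (3.29, 2.45, 1.14):
    print(m, classify(m))
```

Borderline components are typically referred back to the subject-matter-expert committee for a judgment call rather than excluded automatically.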


The primary purpose of many job analyses is to provide validity evidence for creating employment-related tests, such as those used to hire or promote employees, or to grant a license or certification. To facilitate the creation of these examinations, test specifications (also referred to as exam specifications, an examination blueprint, or a test outline) must be created. To establish the specifications, the results of the job analysis survey are presented to a committee composed of 12 to 15 subject-matter experts representative of the profession (e.g., geographic region, work setting, years of experience). The committee should include both returning Task Force members and new subject-matter experts.

Survey responses are used by the committee to make informed recommendations about examination content. The following data are presented:

  • Responses to the background information survey questions
  • Ratings for task and knowledge/skills scales
  • Survey respondents' write-in comments

The committee recommends the content weighting (percentage of items) for the examination. After the committee's recommendations and the survey data are reviewed, the optimal percentage weight for each domain is determined. The test content weights may then guide further examination development activities, including item writing and examination assembly.
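Once percentage weights are settled, converting them into item counts for the blueprint is arithmetic; the domain names, weights, and examination length below are hypothetical.

```python
# Hypothetical content weights (percentage of items) and test length.
weights = {"Domain A": 40, "Domain B": 35, "Domain C": 25}
total_items = 120

# Item count per domain; simple rounding works here because the rounded
# counts happen to sum to the test length. In general, any rounding
# residue must be reconciled when the form is assembled.
counts = {domain: round(total_items * pct / 100) for domain, pct in weights.items()}
print(counts)
```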


Job analysis takes a multi-method approach to identifying the tasks needed to perform a job and the knowledge/skills needed to perform those tasks. Through this research, an organization gains an up-to-date, empirically grounded perspective on the roles and responsibilities of a profession, ensuring that organizational initiatives remain aligned with important current, emerging, and future practices. This information also provides the basis for psychometrically sound and legally defensible examinations.

1 Brannick, M.T. & Levine, E.L. (2002). Job Analysis: Methods, Research, and Applications for Human Resource Management in the New Millennium. Thousand Oaks: Sage.

2 American Educational Research Association, American Psychological Association, National Council on Measurement in Education. (1999). The Standards for Educational and Psychological Testing. Washington, DC: American Psychological Association.

3 International Labour Organization ISCO Web site; retrieved on February 12, 2008.


