
Target Population, Sampling Frame and
Study Sample

Target population includes all the people in a defined group of interest.

A study sample is a subset of a population chosen to represent the target
population.

Sampling frame is a list of all the people in the target population who are
available to be selected for a study sample.

The method of choosing the study sample is called sampling method.

There are generally two types of sampling methods, probability sampling and
nonprobability sampling.
Probability Sampling

In probability sampling, each member of the target population has a known
chance of being selected for the study sample.

Probability sampling enables the evaluator to generalize results from the
sample to the population. (What type of validity is this?)

Weights can be assigned: people with a smaller chance of being selected are given more weight.
Probability Sampling Methods

Simple random sample:

Members of the target population are chosen at random, each with an equal chance of selection.
Systematic random sample:

All members of the target population are randomly ordered

The first unit (person) is chosen randomly from the sampling frame.

Choose a constant A (the sampling interval) and select every Ath person after the
first, returning to the top of the list after reaching the end.
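As a sketch, the systematic selection steps above can be written in a few lines of Python (the frame and names here are illustrative, not from the slides):

```python
import random

def systematic_sample(frame, n):
    """Systematic random sample: pick a random start, then every A-th person,
    wrapping back to the top of the list after reaching the end."""
    interval = len(frame) // n              # the constant A (sampling interval)
    start = random.randrange(len(frame))    # first person chosen at random
    return [frame[(start + i * interval) % len(frame)] for i in range(n)]

frame = [f"person_{i}" for i in range(100)]  # hypothetical sampling frame
sample = systematic_sample(frame, 10)        # every 10th person after the start
```

The frame is assumed to be randomly ordered already, as the slide requires.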
Stratified random sample:

The target population is sorted into distinct categories (race, gender, age groups)
and placed into independent subpopulations (strata).

Participants are then randomly selected from each of the subpopulations.

This method ensures the sample includes people from every stratum.
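A minimal sketch of stratified selection in Python (the population, age groups, and per-stratum count are invented for illustration):

```python
import random

def stratified_sample(population, stratum_of, per_stratum):
    """Sort the population into strata, then randomly select
    per_stratum participants from each stratum."""
    strata = {}
    for person in population:
        strata.setdefault(stratum_of(person), []).append(person)
    selected = []
    for members in strata.values():
        selected.extend(random.sample(members, per_stratum))
    return selected

# hypothetical population of (id, age_group) records
population = [(i, "18-34" if i % 2 == 0 else "35+") for i in range(40)]
sample = stratified_sample(population, stratum_of=lambda p: p[1], per_stratum=5)
```

Because selection happens within each stratum, every stratum is guaranteed to appear in the sample.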
Probability Sampling Methods

Cluster Sample:

The target population is divided into naturally occurring groupings (clusters).

Using random selection, a preset number of clusters is chosen for the sample.

The unit of randomization is the group rather than the individual.
Multistage sample:

Involves two or more of the sampling stages listed above.

Sampling units selected in the first stage (usually cluster sample) are called
Primary Sampling Units (PSUs).
Nonprobability Sample

In a nonprobability sample, the chance of each member of the target
population being selected for the evaluation sample is not known.

Convenience sampling selects participants from the portion of the target population
that is easily accessible.

Quota sampling divides the target population into subgroups, and then selects a
specified number of participants from each subgroup.
The chance of being selected is unknown, so no weights can be assigned to
individuals.
Definition: Instrument, Item,
Measurement, and Variable

An instrument/scale is a tool to measure a concept/construct by reducing
multiple indicators of the concept/construct into one variable.

Items could be questions from a questionnaire, which consists of a series
of questions for the purpose of gathering information from respondents.

Items can also be collected from direct observations, or other data collection
tools and technologies.

Items and instruments are both measurements. An instrument measures a concept or
construct; an item measures an indicator.

The value of a measurement can vary in:

1. Different persons, or

2. Different observations on the same person,

So we use a variable to contain each person's measurement value in an observation.
Example

            Participation in          Calcium Knowledge       Calcium Knowledge
            Nutrition Health          Score Before Nutrition  Score After Nutrition
            Education Program         Health Education        Health Education
                                      Program                 Program
Student 1   Yes                       2                       5
Student 2   No                        2                       2
Student 3   Yes                       3                       5
Calcium Knowledge Score is calculated from 5 questions asking whether a
specific food contains a high amount of calcium. A correct answer scores 1; a
wrong answer scores 0. The five scores are then added together to get a Calcium
Knowledge Score (0-5).
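The scoring rule above can be sketched in Python; the answer key and student responses below are made up for illustration:

```python
def calcium_knowledge_score(answers, answer_key):
    """1 point per correct answer across the five food questions (range 0-5)."""
    return sum(1 for given, correct in zip(answers, answer_key) if given == correct)

answer_key = ["yes", "no", "yes", "yes", "no"]  # hypothetical correct answers
student_1 = ["yes", "no", "no", "no", "yes"]    # two responses match the key
score = calcium_knowledge_score(student_1, answer_key)  # → 2
```

A student who answers every question correctly would score 5, the top of the scale.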
12 Steps for Developing An Instrument
1. Define the purpose of evaluation
2. Review existing instruments
3. Identify objects of interests
4. Constitutively define each object of interest (concept–construct)
5. Operationally define each object of interest (indicator-variable)
6. Choose a scale for measurement
7. Develop items
8. Prepare the draft directions, scoring and layout
10. Evaluation of validities
11. Evaluation of reliabilities
12. Conduct a pilot test
Define the Purpose of Evaluation

The usual objectives in health education are to change participants’
knowledge, attitude, behavior, and health status.

What is the purpose of evaluating health education?

To determine whether participants changed their knowledge, attitude, and
behavior (impact evaluation).

To determine whether participants changed their health status (outcome
evaluation).

To determine whether the health education program was implemented
appropriately during the process (process evaluation).
Review Existing Instruments Related to
the Purpose

Sources to find existing instruments:

Method section in journal articles

Publicly available census and national surveys

Carefully study the purpose and actual items in the existing instrument.

If you find an instrument that matches the purpose of your evaluation, you can
use that instrument and skip the remaining steps.

If you are not able to find an instrument that meets your purpose of
evaluation, proceed to the next step.
Example of Existing Instrument

NIOSH Management Commitment to Safety Scale (DeJoy, Murphy, & Gershon,
1995) was adopted to measure the safety climate in the organization.

The NIOSH Management Commitment to Safety Scale consists of four items
asking respondents to rate their level of agreement: Strongly Disagree (1),
Disagree (2), Agree (3), and Strongly Agree (4) on each of the four statements:

“The safety of workers is a high priority to management where I work”

“There are no significant compromises or shortcuts taken when worker safety is at
stake”

“Where I work, employees and management work together to ensure the safest
possible working conditions”

“The safety and health conditions where I work are good”

The scores of the four items are added and then divided by the number of
items that have values (i.e., items actually answered).
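Assuming the intended rule is "sum the answered items and divide by the number of items that have values," a minimal sketch (the respondent's ratings are invented):

```python
def scale_score(item_scores):
    """Mean of the answered items; None marks a missing response."""
    answered = [s for s in item_scores if s is not None]
    return sum(answered) / len(answered)

# four items rated Strongly Disagree (1) to Strongly Agree (4); one left blank
respondent = [3, 4, None, 3]
score = scale_score(respondent)  # (3 + 4 + 3) / 3 ≈ 3.33
```

Dividing by the number of answered items keeps the score on the 1-4 scale even when a response is missing.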
Identify Objects of Interest

Object of Interest: Health belief
Sharma, Manoj. Measurement and Evaluation for Health Educators.
Jones & Bartlett Learning, 2012. VitalBook file.

Constitutively Define Each Object of
Interest (Concept–Construct)
Perceived Control of Physical
Activity: a person's self-efficacy about physical activity
Antawati, D. I. (2017). Parent role in promoting children's entrepreneurship intention. ADRI
International Journal of Marketing and Entrepreneurship, 1(1).
Operationally Define Selected Objects of
Interest (Indicator–Variable)

The construct of perceived control
of physical activity would be
operationally defined as the ability
to jog, run, walk, swim, and dance.
Choose the Scale of Measurement

The rating of the ability to jog will be collected on a 5-point
scale: Can't do at all (1), Very difficult (2), A little bit difficult
(3), Somewhat difficult (4), Not difficult at all (5). The same scale
applies to the ability to run, walk, swim, and dance.

The ratings on the five items will be summed into one score. The score of
self-efficacy about physical activity will range from 5 to 25, which is an
interval (numerical) measurement.
Develop Items

General Principles of Writing Questions:

1. Write clear items

2. Stay away from jargon, multiple syllable words and abbreviations

3. Keep to a single concept (avoid “double barrels”)

4. Keep items short

5. Avoid using negatives

6. Try not to introduce bias

7. Keep language consistent with the concepts being assessed

8. Frame questions to obtain complete answers
Prepare a Draft with Directions, Scoring,
and Layout

Directions should be optimized as much as possible:

They should be neither too vague nor too detailed, and should not be confusing to the respondents.

The directions should clearly describe what is expected of the respondents.

Sometimes directions can include an example of how to mark the responses.
The instrument should have clear guidance about scoring:

How is each item measured and scored?

How is the overall scale calculated?

What is the range of the scale?

What do high and low scores mean?
Prepare a Draft with Directions, Scoring,
and Layout

The instrument should have a good layout:

For instruments that are to be administered as paper and pencil tests,
there should be enough white space so that the instrument does not
appear cluttered.

The font size should be large enough for respondents to see clearly.

Generally speaking, the demographic information should be placed at the
end and not in the beginning.

For instruments to be administered electronically (e.g. Survey Monkey)

Ideally no more than five questions should be displayed on a single screen.

Provide an indicator of how much of the questionnaire remains after each screen.

A respondent should be able to revisit questions answered before.

Microsoft Word has a built-in function to assess readability.

On Mac: Click on the Word menu > Preferences > Spelling and Grammar

On PC: Click on the File menu > Options > Proofing tab

Select the Show readability statistics check box

Check Spelling & Grammar

After Word finishes checking spelling and grammar, it displays readability
statistics

Provides a score on a 100-point scale

Higher score means easier to understand

A good score is between 60-70 points

Assigns a grade level for the document

The lower the grade level, the easier the document is to understand

Both scores are determined by:

ASL = average sentence length (the number of words divided by the number of
sentences)

ASW = average number of syllables per word (the number of syllables divided by
the number of words)
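The two statistics Word reports are the Flesch Reading Ease score and the Flesch-Kincaid Grade Level, both computed from ASL and ASW. A sketch with the standard formulas (the word, sentence, and syllable counts below are invented for illustration):

```python
def flesch_reading_ease(words, sentences, syllables):
    """Flesch Reading Ease (0-100; higher = easier to understand)."""
    asl = words / sentences      # average sentence length
    asw = syllables / words      # average syllables per word
    return 206.835 - 1.015 * asl - 84.6 * asw

def flesch_kincaid_grade(words, sentences, syllables):
    """Flesch-Kincaid Grade Level (lower = easier to understand)."""
    asl = words / sentences
    asw = syllables / words
    return 0.39 * asl + 11.8 * asw - 15.59

# e.g. a 100-word passage with 8 sentences and 140 syllables
ease = flesch_reading_ease(100, 8, 140)    # ≈ 75.7 (fairly easy to read)
grade = flesch_kincaid_grade(100, 8, 140)  # ≈ 5.8 (about a 6th-grade level)
```

Shorter sentences and fewer syllables per word push the ease score up and the grade level down, matching the slide's guidance.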
Evaluation of Validities and Reliabilities
Evaluation of Validities

Face validity is the determination as to whether the instrument “looks
like” what it is supposed to measure.

Content validity is the extent to which an instrument has covered all
meanings within a concept, under the context of your evaluation.

Construct validity is the degree to which an instrument relates to other
variables as expected within a system of theoretical relationships.

E.g. Measuring SES using only personal income (NOT good!)
E.g. IQ and GPA
Criterion validity is the extent of convergence with a criterion or another
commonly used instrument that measures the same concept. In some
cases, there is a gold standard for measuring a concept, so we can test
our instrument against that gold standard.

E.g. student performance and SAT
Evaluation of Reliabilities

Internal consistency reliability measures the extent to which the items in an
instrument are related to each other.

Can only be calculated if the scale has multiple items

Cronbach’s alpha is computed by taking the mean of the individual item-to-item
correlations and adjusting for the total number of items.

α = Nρ / [1+ ρ(N – 1)]

α = Cronbach’s alpha; N = Number of items; ρ = Mean inter-item correlation

The value of Cronbach’s alpha ranges from 0 to 1, with values closer to 1 indicative
of higher internal consistency.

An acceptable level for scales is generally considered to be equal to or over
0.70 (Carmines & Zeller, 1979; Nunnally & Bernstein, 1994).
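The slide's formula translates directly into code; the item count and mean inter-item correlation below are made-up inputs:

```python
def cronbach_alpha(n_items, mean_r):
    """Standardized Cronbach's alpha: alpha = N*rho / [1 + rho*(N - 1)],
    where N is the number of items and rho the mean inter-item correlation."""
    return (n_items * mean_r) / (1 + mean_r * (n_items - 1))

alpha = cronbach_alpha(4, 0.5)  # → 0.8, above the 0.70 acceptability threshold
```

Note how adding items raises alpha even when the mean correlation stays fixed, which is the "adjusting for the total number of items" in the slide.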
Evaluation of Reliabilities

Test-retest reliability is the extent of association between two
observations taken over time.

The interval between the first and second measurements should be
neither too short nor too long; an interval of 2 weeks is recommended
(Nunnally & Bernstein, 1994).

The correlation coefficient (Pearson or Spearman) is then calculated
between the first and second measurements.
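A from-scratch sketch of the Pearson correlation between the two measurement occasions (the two score lists are hypothetical):

```python
import math

def pearson_r(x, y):
    """Pearson correlation between first and second measurements."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# hypothetical scale scores from the same respondents two weeks apart
time_1 = [5, 7, 9, 11, 13]
time_2 = [6, 7, 10, 10, 14]
r = pearson_r(time_1, time_2)  # close to 1 → good test-retest reliability
```

An r near 1 means respondents kept roughly the same rank and spacing across the two administrations.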
Pilot Test

Pilot test the instrument with a small sample of the target population to assess:

Comprehension

Time to complete the instrument

Feedback for improvement
Resources for Questionnaire
Development

http://www.indiana.edu/~educy520/sec5982/week_3/qu
estionnaire_development_frary.pdf

https://stacks.cdc.gov/view/cdc/11674
Exam 1 and Instrument Design
Assignment
HLTH 432 Questionnaire Development Assignment Worksheet
Name: __________________________________________________
Date:_______________
Instructions:
1. Review the scenario in Part A and develop an instrument with 3-5 items to determine
whether the program objective is achieved.
2. In order to finish this assignment, you need to research ways to improve home
environment for the prevention of fall injuries.
3. Follow 12 steps mentioned in the PowerPoint (page 10) for instrument development, and
complete the table in Part B.
4. Even if you are able to find an existing instrument in step 2, proceed to the remaining
steps to develop your own instrument anyway.
5. Skip step 11 and only describe what needs to be done in step 11 in the table.
6. Find a classmate to do expert review and pilot test for your instrument. Provide your
classmate’s opinions on face validity, content validity, and pilot test feedback in Part C.
7. Add the final version of your instrument with directions, scoring rules, and the result of
the readability test in Part D.
Part A: Scenario
You are the Director of the Department of Aging in Baltimore County. You conducted a fall
injury prevention education and outreach program for older adults in your community. The
program objective was to promote a safer home environment to prevent fall injuries among older
adults. The Centers for Disease Control and Prevention (CDC) funded this program and now
wants the Department of Aging to evaluate the program.
Part B: Complete the Table

 #    Instrument Design Step Name    Describe what you did in this step or what needs to be done
 1
 2
 3
 4
 5
 6
 7
 8
 9
 10
 11
 12
Part C: Add face validity and content validity review opinions, and pilot test feedback
Signature of Classmate: __________________________________ Date: _______________
Part D: Add instrument with directions, scoring rules, and the result of readability test