Research demonstrations are
essentially hybrids of research and service projects. In a research
demonstration, scientific design principles are used to establish the effectiveness
of a treatment or treatment approach in a real world setting. In recent
years, this approach has been promoted as a way of gaining external validity
that may be lacking when medical treatments are tested in a very controlled
and rarefied environment, or tested with a restricted population.
Often the treatment approach being studied is multifactorial, including
not only medical procedures and pharmacological agents, but also psychosocial
interventions and new ways of making services available and accessible.
The design strategies in a research demonstration are
very much the same as in more conventional medical research. Usually,
a research demonstration entails randomization to groups, with a control
or comparison group receiving standard care, and the experimental group
receiving an innovative treatment. This treatment may in fact consist
of standard services that have been reorganized or enhanced.
Some research demonstrations are observational, in which
groups of subjects are followed prospectively as an innovative treatment
approach is introduced. As with more conventional medical research, this
approach does not allow the investigator to make strong claims about causality.
However, it is often the only way to study treatments for which randomization
is not feasible or ethical.
Although the design aspects of research demonstrations
are familiar, investigators often have difficulty conceptualizing both
the treatments and outcomes. Especially when the treatment of interest
is multifactorial or a reorganization of existing services, it may be difficult
to define the treatment, its components or its intensity, or to identify
which of many possible outcome measures are the important ones. In this
way, the challenges of a research demonstration are similar to those of
evaluating the effectiveness of a service project.
In recent years, funders have required applicants for
service projects to include well-developed plans for stringently evaluating
the project's effectiveness. The methods that have been developed to do
this can be adapted for research demonstrations as well.
"A Guide to Planning and Evaluating Performance", a document
published by the Maternal and
Child Health Bureau of the Health
Resources and Services Administration, outlines the steps to designing
a service project evaluation. Here is a summary of this discussion:
The first important step is to define the targeted
health problem. Typically, the problem might
be stated in one of two ways:
- As a health systems problem, related to the delivery of
  health care, in which the problem is stated in terms of access, quality
  or costs; or
- As a health status problem, which is an undesirable condition
  related to death, disability or disease.
Clearly, systems and status problems are related,
with systems problems usually conceptualized as contributing to status
problems. In general, health status problems are easier to measure objectively
and to tie to discrete outcomes. In other words, infant mortality is easier
to define and measure than is a lack of access to prenatal care. Also,
changing the status problem is the ultimate goal of any new treatment program.
Establishing an innovative way to deliver prenatal care would have as its
ultimate goal reducing infant mortality, not simply increasing the number
of prenatal visits.
Once the problem has been defined, you should be able
to state the goal of your research demonstration. A goal
statement is broad and directly related to
the defined problem. It is usually a long-term end point, and may not be
stated in measurable terms. The goal is the ultimate compass for your project.
For every activity you plan, you should be able to answer the question:
How does this help meet the goal?
Objectives are more immediate, measurable conditions that you expect to reach by an
identifiable date. They should be clearly related to the goal, but also
clearly imply what activity will be undertaken to reach them. Objectives
should be realistic and achievable, and within your capacity and resources
to achieve. Your objectives should also point to the outcome variables
you will rely upon to say whether the demonstration was a success.
The distinction between systems and status problems is
reflected in the difference between process and outcome evaluation. Process
evaluation measures activities -- number of appointments made, number of
telephone contacts -- whereas outcome evaluation measures the expected
results -- a decrease in infant mortality.
Outcome measures often are not identical to the targeted health status problem. They may
be intermediate results or proximal determinants of the health status problem.
For example, when infant mortality is the health status problem, birth
weight might be the outcome measure of interest. This is because birth
weight is thought to determine infant mortality to a large extent, and
at the same time is a source of more detailed information for a broader
range of the population -- even in an area with a high infant mortality
rate the event of infant death is relatively rare, whereas every baby has
a birth weight.
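The arithmetic behind these two measures can be sketched briefly; the counts below are hypothetical and chosen only for illustration, not taken from any actual service area:

```python
# Why an intermediate outcome (birth weight) yields more data than the
# ultimate health status outcome (infant death). All figures below are
# hypothetical, chosen only for illustration.

live_births = 2000        # births in the service area in one year
infant_deaths = 24        # deaths before age one -- a rare event
low_birth_weight = 170    # babies born weighing under 2,500 grams

# Outcome measure for the health status problem:
# deaths per 1,000 live births.
infant_mortality_rate = infant_deaths / live_births * 1000

# Intermediate outcome: every baby contributes a birth weight, so the
# denominator is the same but the numerator is far larger, giving a
# more detailed picture of the processes leading to infant mortality.
low_birth_weight_pct = low_birth_weight / live_births * 100

print(f"Infant mortality rate: {infant_mortality_rate:.1f} per 1,000 live births")
print(f"Low birth weight:      {low_birth_weight_pct:.1f}% of live births")
```

With these illustrative counts, 24 deaths among 2,000 births is a rate of 12.0 per thousand, while 170 low-weight babies is 8.5% of births: the intermediate measure rests on roughly seven times as many events, which is why it can track change more sensitively.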
In designing your evaluation you may wish to use this
kind of intermediate result as your outcome measure. This is especially
true when the ultimate health status outcome is rare or otherwise hard
to measure, and the intermediate outcome gives a detailed picture of the
processes leading to it. The major proviso is to be sure that there is
scientific theory and knowledge to substantiate the relationship between
the intermediate outcome and the overall health status problem.
Even when you have a sharply focused outcome variable
in mind, often you will also measure contributing factors, including the
psychosocial context and health systems problems. This is particularly
the case with the kinds of complex health status problems, such as substance
abuse, infant mortality or mental illness, which are the usual targets
of research demonstrations. In order to track the impact both of systems
problems and of your intended solutions, you will want to include process
evaluations every step of the way.
Once you have defined your problem, your goal, your objectives
and your outcome measures, it is straightforward to plan the activities,
time line and budget of the project. In this, as in every other aspect
of research, you must be as clear and specific as possible.
Here is an example of goals and objectives adapted from
"A Guide to Planning and Evaluating Performance", showing the level
of specificity required for each:
Health status problem: High infant mortality.

Goal: Reduce infant mortality.

Objectives:
- Reduce infant mortality rate to 11.0 deaths
  per thousand by 2010.
- By 2002, the percentage of low birth weight
  babies born to residents of the service area should be less than 7.0%.
- By the end of the grant year, increase the
  number of women in the service area who receive prenatal care in the first
  trimester by 20% over the number who received such services during the