Formal Evaluation

When a formal evaluation is conducted, there are well-established expectations about what questions will be raised and what data will be assessed. Evaluation research looks back at all that went into the policy or program, a complex task. Some key questions posed are:

  1. Was the policy adequately formulated? What were the goals? Was the underlying causal model, often left unspecified but perhaps glaringly revealed in hindsight, adequate?

  2. Was the implementation competent? Well organized? Effective? Timely? Coordinated? Well led?

  3. Was the budget adequate? Was the program cost-effective? How many units of the goal indicator were achieved per unit of budget? Are marginal returns rising or falling? Is the program worth the expenditure? (A worked sketch of these ratios follows this list.)

  4. There should be a specific "client analysis": Who was helped? Identify and explain the client group. Were expectations of benefits met? Who is prepared to defend the program?
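
To make question 3 concrete, here is a minimal sketch of the cost-effectiveness arithmetic. The budget levels and outcomes are entirely hypothetical figures, not data from any real program; the point is only the two ratios: units of the goal indicator per unit of budget, and the marginal return at each spending increment.

```python
# Hypothetical cumulative budget (in $1,000s) and cumulative units of the
# goal indicator achieved at each spending level.
budget = [100, 200, 300, 400]
units_achieved = [50, 90, 120, 140]

# Average cost-effectiveness: units of the goal indicator per unit of budget.
for b, u in zip(budget, units_achieved):
    print(f"budget {b}: {u / b:.2f} units per $1,000")

# Marginal returns: extra units gained per extra unit of budget at each step.
# Falling marginal returns suggest each additional dollar buys less of the goal.
for i in range(1, len(budget)):
    marginal = (units_achieved[i] - units_achieved[i - 1]) / (budget[i] - budget[i - 1])
    print(f"step {i}: {marginal:.2f} additional units per additional $1,000")
```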

What is generally involved? How do we conduct such an investigation? These steps are helpful:

  1. Specification: What are the goals? Criteria? Purposes? Upon what indicators is this policy, program, process to be evaluated? What is the bottom line?

  2. Measurement: What information do we need to assess the objectives specified in step 1, above? Recognize that a single anecdote can carry more weight than a body of careful data. (A minimal sketch after this list illustrates tying the goals from step 1 to measurable indicators.)

  3. Analysis: The use of data to draw conclusions. This can range from quantitative techniques to comparative studies to carefully designed surveys. Care must be taken when using opinion, impressions, and anecdotes.

  4. Recommendations: What should be done next? Terminate? Replicate? Amplify? Adjust? Cut or expand? Evaluation research inevitably is called upon to be highly prescriptive. Implementing changes is another matter, however.
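
As a companion to steps 1 and 2, the following minimal sketch shows one way to tie stated goals to measurable indicators and targets and then check measurements against them. The goals, indicators, and figures are illustrative assumptions, not drawn from any actual evaluation.

```python
# Step 1 (Specification): each goal is tied to a measurable indicator,
# a target, and the direction of improvement. All values are hypothetical.
specification = {
    "reduce fatal crashes": {
        "indicator": "fatal crashes per year",
        "target": 100,
        "lower_is_better": True,
    },
    "increase seat-belt use": {
        "indicator": "percent of drivers belted",
        "target": 80,
        "lower_is_better": False,
    },
}

# Step 2 (Measurement): hypothetical observed values for each indicator.
measurements = {
    "fatal crashes per year": 115,
    "percent of drivers belted": 83,
}

# Check each goal against its target.
for goal, spec in specification.items():
    value = measurements[spec["indicator"]]
    if spec["lower_is_better"]:
        met = value <= spec["target"]
    else:
        met = value >= spec["target"]
    print(f"{goal}: measured {value}, target {spec['target']}, met: {met}")
```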

To provide honest, objective assessment, the scientific method can be applied. The social world, however, is a poor laboratory: the analyst cannot hold the variables constant and painstakingly manipulate them. Ethical and practical considerations intrude. The social world, like public policy itself, is a moving target, hard to pin down and to analyze carefully.

Interrupted time-series analysis provides an inexpensive method that yields preliminary results but is notoriously fallible. The task built into the research design is to systematically track the impacts on a population that has received a particular intervention and to compare the impact variables with those of another population that has not. Is there a measurable difference in specified outcomes? Note the case of the Connecticut police program on speeding enforcement: the program seemed effective in reducing fatal crashes until it was found that neighboring Massachusetts, which had no such program, showed the same decline. The next hypothesis was that the weather was responsible for the decline in highway mortality.
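
A minimal sketch of the comparison built into such a design appears below. The figures are invented for illustration and are not the actual Connecticut or Massachusetts data; the point is only the logic of comparing the pre/post change in the treated series with the same change in a comparison series.

```python
# Annual fatal crashes, hypothetical figures. The intervention takes effect
# between index 3 and index 4 in the treated series only.
treated    = [320, 310, 330, 340, 300, 295, 290, 285]  # state with the program
comparison = [305, 300, 315, 325, 290, 285, 280, 275]  # neighboring state, no program
intervention_index = 4  # first post-intervention year

def pre_post_change(series, cut):
    """Average level after the intervention minus the average level before it."""
    pre, post = series[:cut], series[cut:]
    return sum(post) / len(post) - sum(pre) / len(pre)

treated_change = pre_post_change(treated, intervention_index)
comparison_change = pre_post_change(comparison, intervention_index)

print(f"Change in treated series:    {treated_change:+.1f}")
print(f"Change in comparison series: {comparison_change:+.1f}")

# If the comparison series shows roughly the same drop, the program is a weak
# explanation; some shared factor (e.g., weather) may account for the decline.
print(f"Difference attributable to the program: {treated_change - comparison_change:+.1f}")
```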

The Public Policy Web
© by Wayne Hayes, Ph.D., ®ProfWork, July 15, 2001
whayes@ramapo.edu
November 10, 2002