Public Policy

How Do We Improve the Process of Government Improvement?

March 18, 2015

Have you read about past efforts by the United States federal government to improve its performance? If so, you’ve encountered a jargon-heavy world, complete with an alphabet soup of initiatives:

  • Performance budgeting;
  • Planning-programming-budgeting systems (PPBS);
  • Management by objectives (MBO);
  • Zero-based budgeting (ZBB);
  • War on waste;
  • Total quality management (TQM);
  • New public management (NPM);
  • Government Performance and Results Act (GPRA);
  • Lean six sigma;
  • Program Assessment Rating Tool (PART);
  • “Rigorous” evaluation;
  • Stat movement;
  • High priority performance goals (HPPGs); and
  • GPRA Modernization Act (GPRAMA).

If you read through these bullets, you’ve reviewed the last 60 years of centrally directed efforts to improve the federal government’s performance. Each initiative originated in a sincere desire to use particular social science and business methods to address society’s problems. Too often, however, these efforts were conceived narrowly, driven by fad-fueled enthusiasm, or pursued without cumulative learning from past experience. Eventually, most were discarded. Many left uncertain legacies. It’s not clear that these efforts have improved the capacity of federal agencies to improve.

Groundhog Day—the Government Performance Version

This multi-decade series of efforts to improve government performance has been disjointed, to say the least. To a large extent, each iteration has drawn from a relatively narrow view and theory of how to improve. Under the thought leadership of central institutions, including the Office of Management and Budget, the federal government has careened from an emphasis on performance measurement, to an emphasis on policy analysis tools, to belief in statistical approaches, to faith in goal-setting, to viewing randomized controlled trials as the preeminent form of program evaluation. History demonstrates that most efforts to improve the process of government improvement have suffered over time from a lack of coherence and continuity. Furthermore, a review of the various literatures on government performance improvement suggests widespread disappointment with their success. Why? What’s going on here?

In a forthcoming article in the American Journal of Evaluation, my co-author and I discuss what we believe to be part of the answer. In brief, what we might call the field of “government performance improvement” has suffered from tribal thinking and correspondingly narrow initiatives. To take the terms most prevalent now: performance measurement, data analytics, and program evaluation are all part of the effort to produce evidence about public and nonprofit programs that can be used to improve public performance and enhance learning. Yet performance measurement, data analytics, and program evaluation have been treated as different tasks. Scholars and practitioners who focus on each of these approaches speak their own languages in their own circles. Even the diverse field of evaluation has been balkanized at the federal level in recent years: OMB has emphasized one kind of impact evaluation and tended to ignore, and even disparage, other types of summative and formative evaluation.

Recognizing and Overcoming Balkanization

This conceptual balkanization has been pervasive over time, both in practice and in academic discourse. It has also had extensive ripple effects on institutions and on the mind-sets of the individuals who work in them. In our reading and experience, characterizing performance measurement and analytics as distinct from evaluation has led to a persistent and pervasive separation among groups of people that has been costly in terms of both resources and organizational learning. In practice, an agency may contain an island or two of evaluators in their own shops.

In some agencies with extensive administrative bureaucracies, there may be little emphasis on evaluation per se, but instead shops that focus on analytics or operations research. Some agencies have also developed capacity in “policy analysis,” which often treats summative evaluation methods such as cost–benefit analysis and impact evaluation as the preferred tools. In contrast, staff assigned the function of complying with the Government Performance and Results Act have generally focused on goal setting and performance measurement; historically, these staff have had little training or experience in other evaluation and analytical methods. Each agency may have a unique constellation of these capacities, with one-off success stories of integration, but typically the capacities have not worked together and in many cases have not even been aware of each other.

Due in part to a sense of separateness among three groups of people—scholars and practitioners of (1) performance measurement, (2) the increasingly popular data analytics, and (3) the broad, multidisciplinary field of evaluation—a corresponding sense of separateness persists among the constructs of measurement, analytics, and evaluation. However, we suggest that if performance measurement and data analytics were consistently viewed as parts of evaluation practice—parts that could benefit from the insights of other evaluation practitioners—public and nonprofit organizations would be better positioned to build the intellectual and organizational capacity to integrate methods and thereby better learn, improve, and be accountable.

Moving Toward a More Strategic Approach to Evaluation Within Organizations

If the goal of evaluation (including measurement and analytics) is to support achievement of an organization’s mission, we argue that the evaluation function within an organization should be conceived as a unified, interrelated, and coordinated “mission-support function,” much like other mission-support functions such as human resources, finance, and information technology. Regardless of how evaluation-related mission support is organized, distributed, and coordinated in a particular organization—which may vary with organizational history, culture, and stakeholder interests—this function could support general management of an agency, program, or policy initiative. Evaluation-related mission support could work closely with other mission-support functions, albeit with the expectation that analysis, measurement, and other evaluation approaches will be valued as a genuine source of mission support rather than neglected or implemented piecemeal.

My co-author does not advocate for policy options, given the nature of his job. Consequently, I am speaking for myself when I argue that the conceptual arguments in our article have implications that call for action. Among other things, federal agency leaders should:

1. Design a credible and “independent” evaluation function in a strategic and comprehensive manner, ensuring that its organizational location supports collaboration across offices. Note: not independent in the sense of an Office of Inspector General, but respected by, yet not perceived to be co-opted by, program management. The Chief Evaluation Office at the Department of Labor provides a superb model.

2. Offer incentives that encourage program managers to request evaluation work to support learning and performance improvement, not solely for accountability. Senior executives should develop learning agendas for their divisions that specify how and when evaluation support can further managerial learning objectives.

3. Design and empower the evaluation function to take a lead role in informing organizational leadership and management about both the credibility of evidence used to assess past performance and the prospective relevance of different kinds of evidence for learning, performance improvement, and decision making.


Kathryn Newcomer is the director of the Trachtenberg School of Public Policy and Public Administration at George Washington University.

