The quality of nonprofit management is a widespread and growing concern
throughout the philanthropic community. According to Lester Salamon, "In
addition to the fiscal and economic challenges confronting the nonprofit sector
at the present time is a third challenge, a veritable crisis of effectiveness.
Because they do not meet a ‘market test,’ nonprofits are always vulnerable to
charges of inefficiency and ineffectiveness. However, the scope and severity
of these charges have grown massively in recent years."1
This concern has led grantmakers to invest substantial sums in nonprofit
efforts to build management capacity. Technical assistance grants
pay for outside consulting, while general operating support and management
development grants can fund internal management costs. In
addition, grantmakers have supported the development of a growing
community of management support organizations (MSOs) around the
country that provide management support to local nonprofits.
These investments in capacity building have shown a marked increase
in recent years, not only in absolute dollars, but as a percentage of total
grant dollars. According to information reported to the Foundation Center,
from 1997 to 2001, a period when total grants grew steadily, technical
assistance grants increased from $62 million to $218 million and from
0.8 percent to 1.3 percent of total grant dollars. Similarly, management
development grants increased from $60 million to $260 million, and
from 0.8 percent to 1.6 percent of all grant dollars. In addition, general
operating support increased steadily during this period, reaching 13.6
percent of total grant dollars in 2001.2
For some grantmakers, capacity building is simply a new name for
what they have been doing for years through technical assistance grants
and support for MSOs. For others, capacity building means stepping
back from such practices, and making a concerted effort to learn about
the impact of prior work, compare approaches, and make adjustments.
There seems to be a growing recognition that nonprofit improvement is
difficult, and that grantmakers need not only to understand the challenges
but to learn from each other. This interest in learning led to the formation
of Grantmakers for Effective Organizations (GEO), an affinity
group of the Council on Foundations. Founded in 1988, GEO consisted
of 360 organizational members in 2003. The purpose of GEO is
described in its bylaws:
. . . to promote learning and dialogue among funders about the
effectiveness of nonprofit organizations working to build a more
just and sustainable society. The organization does this by
exploring the wide range of strategies for accomplishing
organization-building; and the constructive and catalytic
roles funders can play in encouraging and supporting organizational
effectiveness among nonprofits.
GEO seeks to support grantmaking nonprofit organizations
in increasing their effectiveness, to strengthen the overall
practice of organizational effectiveness grantmaking, and
to increase attention to organizational effectiveness within
the broader foundation and nonprofit communities.
Today, there is much at stake, as interest in improving nonprofit effectiveness
has outpaced the field’s knowledge of what does and does not
work. Despite the growing interest, there are lingering questions about
the impact of investments in capacity building. Studies show that results
are mixed, and some grantmakers are disappointed in their results, as
grantees remain unstable despite years of investments.
It is tempting to attribute the lack of real improvement to the fact that
organization change is difficult, and has an inherently low rate of success.
Perhaps grantmakers have undertaken this work with unrealistically
high expectations. Barbara Kibbe cautions that, "We could easily have a
chilling effect on what is a constructive, holistic approach to supporting
organizations as well as programs if we set unrealistic expectations by
seeming to imply that modest efforts at capacity building should have
impacts far beyond their depth or intensity."3
It is also tempting to blame external conditions that explain why,
despite successful assistance, an organization foundered. In other words,
the treatment was effective, but the patient died anyway. A competing
explanation, one that should not be ignored, is that some capacity building
programs have simply not been very effective. Leaders of GEO
express concern that weak results from poorly designed or implemented
programs will cast doubt on the value of such investments.
Rather than accepting a modest impact, the field needs to understand
the reasons for lack of improvement and develop better approaches.
Grantmakers can design more effective approaches
and consultants can provide more effective assistance. While additional
funding will improve impact for some, for others more effective programs
and consulting can be achieved within the same budget. High-quality
capacity building programs are within the reach of all grantmakers, at
varying levels of investments.
What is Capacity Building?
Capacity building is defined as actions that improve nonprofit effectiveness.
Nonprofit managers are responsible for building capacity, although
they may get assistance from consultants or others. Grantmakers get involved
by developing capacity building programs that provide resources to
support nonprofits as they work to improve their effectiveness. Capacity
building programs are often designed with a specific type of grantee in
mind, a particular set of issues to address, or a goal of improving one area
of nonprofit performance. Others are more loosely constructed and offer
support to any type of nonprofit to address whatever issues will improve
its effectiveness.
A sponsor designs and delivers a capacity building program but does
not necessarily fund it. Capacity building programs can be sponsored and
funded by a single grantmaker, such as the Organization Effectiveness
Grants made by the Packard Foundation. Programs can also be sponsored
by an independent consultant, consulting firm, or management
support organization, which then solicits funding for the program.
Management support organizations (MSOs) are nonprofits, often
supported by a large number of funders, that assist other nonprofits
to improve their effectiveness. They provide a range of services, from
reference materials, training, and networking opportunities to on-site
consulting. Many MSOs provide regional support to nonprofits, such as
CompassPoint in San Francisco, Community Resource Exchange (CRE)
in New York City, and the Support Center for Nonprofit Management in
New York City. Other MSOs, such as National Arts Strategies and the
Environmental Support Center, generally confine their assistance to one
program area and support nonprofits across the country.
Capacity building support can address management practices, provide
financial resources, or both. For some nonprofits, additional resources
are all that is needed to become more effective. While many nonprofits
seem to suffer from "poor management"—tasks not getting done, poor
planning or poor communication—the cause may be lack of staff, equipment,
or even office space. An effective capacity building intervention
may be as simple as funding an assistant director position or providing
additional overhead funds. Grantmakers can provide extra resources in a
variety of ways: loans, grants for capital projects, funds for
administrative staff, or funds for technology.
While additional resources can be valuable, or even critical, to a nonprofit’s
effectiveness, they are not particularly difficult to implement. If a
nonprofit needs an assistant director, providing funds for the position
solves the problem (at least until this funding runs out). A far greater
challenge comes from trying to improve a grantee’s management practices.
This type of capacity building relies heavily on outside consultants
who can help in a variety of ways, such as building the staff’s knowledge
and capabilities in specific management areas, helping design systems and
procedures, improving decision processes, facilitating discussions, coaching
leaders, and resolving conflicts.
Capacity building programs that seek
to improve nonprofit performance by improving management practices are
widespread, but they present very difficult challenges.
For nonprofit leaders, even highly skilled ones, organization change can
prove a daunting undertaking. For consultants, the challenge is to
bring their expertise to bear on issues that have to be solved by others.
And for grantmakers, the challenge is to entice grantees to undertake
capacity building work, which is only successful if grantees are motivated.
Overall, improving management practices can be quite a challenge
for all involved.
In general, having a goal of management improvement is not a sign of
weakness, but a sign of organizational strength. All organizations, commercial
as well as nonprofit, need to make adjustments to their structures
and systems, acquire new skills and capabilities, and adjust their strategies
in order to be effective. Excellent organizations constantly seek to
improve program implementation, develop new resources or address
unmet needs in the community.
At the same time, it should not be surprising that the nonprofit world
includes many organizations that lack important management capabilities,
largely because some nonprofit leaders come into these positions
without much experience or training in management. A lack of management
experience is not a fatal shortcoming, however, as there are far more
important skills and talents that nonprofit leaders bring to their organizations:
expertise in a program area, knowledge of the community, respect
of community leaders, access to a network of resources, strong interpersonal
and negotiation skills, a compelling vision, and the ability to persuade
others to join in their effort. Management skills and capabilities
can be learned, particularly if capacity building support is available.
Defining Nonprofit Effectiveness
The goal of capacity building is to help nonprofits become more effective,
but there are different views about what constitutes an effective
nonprofit. For some, effectiveness is viewed as a set of organization capabilities,
practices, and behaviors. The David and Lucile Packard Foundation
describes its view of effectiveness this way:
Organizational effectiveness is difficult to define precisely and
impossible to reduce to a set of attributes or activities. It is a
rich blend of strong management and sound governance that
enables an organization to move steadily toward its goals, to
adapt to change, and to innovate. The pursuit of organizational
effectiveness means continuous learning and improvement
in the management of resources and the coordination
and leadership of people. It assumes clarity of vision and alignment
of goals and activities with that vision. It embraces the
importance of defining hoped-for outcomes and the need to
measure progress toward achieving those outcomes. And, it
implies periodic reflection and critical self-assessment to
reevaluate the organization’s role in the context of an increasingly
complex and ever-changing society.4
GEO has developed a working definition of organizational effectiveness:
the ability of an organization to fulfill its mission through a blend of
sound management, strong governance, and a persistent rededication to
achieving results. This definition focuses on fulfilling mission as an outcome
and a set of capabilities to achieve it, although the capabilities are a
bit vague ("sound management," "strong governance").
Here, improved effectiveness is defined in terms of organization
performance, not as a set of management capabilities or practices.
Thus, an organization is not more effective because it has a strategic plan;
it is only more effective if the planning process leads to specific outcomes
such as better program outcomes, expanded programs, or a more stable
organization. Similarly, a board is not more effective just because it has a
higher attendance rate at meetings. A board is more effective if the organization’s
performance improves as a result of the board’s actions.
There is also considerable debate over how to assess nonprofit performance.
For this analysis, four aspects of performance are defined.
Acknowledging that capacity building programs may be designed with
very different goals in mind, "improved performance" may describe
improvements in one or more of the following aspects of performance.
Organization stability. At the very least, an effective organization must
consistently deliver its programs and services and survive over the longer
run. Delivery requires management systems to attract and retain staff and
organize the work. A stable organization is able to adapt to changes in
funding or community needs. It also attracts sufficient resources, both
financial and volunteer, to continue to operate. If financial support for
the organization’s mission has declined, the organization must be able to
adjust the level of programs and services accordingly. The most basic
measure of organization stability is whether an organization survives,
and whether programs and services have consistently been delivered.
Financial stability. Financial stability protects organizations from going
out of business due to unexpected short-term events. Financial stability
refers to short-term survival, while organizational stability is concerned
with attracting resources for long-term survival. Organizations can be
considered financially stable if they have sufficient working capital to
meet normal fluctuations in cash flow and sufficient reserves to meet capital
needs, such as building maintenance and replacement. One measure
of financial stability is working capital as a percentage of the total budget.
Capacity building work often ignores this area of performance.
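The working capital measure just mentioned can be made concrete with a small sketch. The computation of working capital as current assets minus current liabilities, and the dollar figures in the example, are illustrative assumptions rather than figures from the text.

```python
def working_capital_ratio(current_assets, current_liabilities, annual_budget):
    """Working capital (current assets minus current liabilities),
    expressed as a percentage of the total annual operating budget."""
    working_capital = current_assets - current_liabilities
    return 100.0 * working_capital / annual_budget

# A hypothetical nonprofit with $400,000 in current assets, $250,000 in
# current liabilities, and a $1,000,000 annual budget holds working
# capital equal to 15 percent of its budget.
print(working_capital_ratio(400_000, 250_000, 1_000_000))  # 15.0
```

Whether a given percentage is adequate depends on the organization's normal cash-flow fluctuations; the measure is most useful for tracking a single organization over time.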
Program quality. The consistent delivery of programs and services does
not guarantee that programs will be of high quality. Particularly for social
services, quality can only be determined by assessing a program’s
long-term impact. But very few programs have such information, so surrogate
indicators of program quality are needed, such as whether: 1) a
program design is based on research about effective programs; 2) a system
for outcome measurement is in place that provides management with
short- and intermediate-term measures of performance; 3) management
uses such data to make improvements to program implementation and
design. An organization using all three management practices is much
more likely to have high-quality programs. Here again, program quality
is a neglected goal of much capacity work.
There is also a connection between organization stability and program
quality, as it is unlikely that a program will have a long-term impact
unless it has been consistently implemented for three years or longer.
Thus, a stable organization is a prerequisite to high-quality programs.
Organization growth. Attracting
additional resources and providing more programs and services can be an indicator
of an effective organization. Too often, however, organizations suffer from unhealthy
growth that leaves them financially unstable and hurts program quality. Growth
alone, then, is not a useful indicator without taking into account financial
stability and program quality. At the same time, an organization need not
grow to be considered effective. Both stable and growing organizations
can be considered effective, whereas decline may indicate ineffective
management.
The data and conclusions presented here are derived largely from two
sources of information: 1) an analysis of existing research on capacity
building and related topics, and 2) interviews with experienced capacity
builders. The author’s review of research included over 100 articles on
organization effectiveness and organization change, many from the field
of Organization Development. In addition, more than thirty evaluations
of capacity building were reviewed. Evaluations provide important information
about programs, challenges, and outcomes. Many programs
report disappointing results based on short-term evidence and
describe plans to improve their impact. From this analysis, the author developed
preliminary hypotheses about factors that influence capacity building
impact, and incorporated them into interviews with capacity builders
about their experiences.
Interviews with Capacity Builders
The second source of information for this analysis is interviews and background
discussions with more than a hundred individuals experienced in
capacity building work—primarily foundation staff, intermediaries who
design programs, and consultants who provide assistance to nonprofits.
An initial round of interviews, conducted from October 1999 through
December 2000, provided an important "view from the field" that
helped shape the rest of the research and the structure of this book. Additional
interviews were conducted between mid-2001 and January 2003.
In general, questions were asked about the details of program implementation,
program effectiveness, evaluation methods, lessons learned,
program improvements, and continuing challenges. Experienced capacity
builders were asked about their successes and failures—grantees that
achieved significant improvement, and those that failed to make progress.
A variety of possible explanations were examined: the impact of external
conditions, grantee conditions and capabilities, actions of the consultant,
actions of the sponsor, and consulting skills and experience.
Thirty capacity building programs were studied in greater depth, many
with long histories and approaches that have evolved over time. From
these programs, a typology was developed of three general approaches to
capacity building, designated as Capacity Grants, Development Partners,
and Structured Programs. Nine programs were selected to illustrate these
approaches.
The initial round of interviews revealed strong views about what
works, and significant disagreement among experienced capacity builders
about program design and impact.
- Some sponsors designed alternative approaches because they view
popular approaches as largely ineffective. These sponsors are
skeptical of the claims offered by some grantmakers about their
impact with grantees. Either evidence from these programs has
not been offered, or they find it unconvincing.
- Many considered the quality of consulting to be uneven, and
responsible for poor results.
- Some believe that the consulting approach has a strong impact on
client progress, apart from the consultant’s skills and knowledge.
Several sponsors were using a consulting approach that they
believe differs from traditional consulting and is more effective.
- Interviews also revealed two opposing views of organization
change: that change is a simple process, and that change is
complex. This perspective on change influenced not only the
design of capacity building programs, but also views about effective
consulting.
Through these interviews, it became clear that to understand a capacity
building program, it is most important to uncover the assumptions
and beliefs about organization change, organization effectiveness, and
consulting effectiveness that underlie the design and implementation of
any program. The next section lays the groundwork for this by summarizing
relevant research on characteristics of effective organizations, the dynamics
of organization change and improvement, and research on consulting.
Research on Nonprofit Effectiveness and Improvement
To truly understand an organization, try to change it. —Kurt Lewin
Capacity building presumes that someone—grantmaker, consultant, or nonprofit
leader—has a point of view about how to improve a nonprofit’s performance.
But what is the basis for this view? It seems that much capacity
building assistance is based largely on common-sense notions about good
management, popular management books, and assessment tools. Proponents
of capacity building recognize this gap in knowledge and have called
for additional research. While research specifically on the special management
challenges of nonprofits is certainly needed, more than fifty years of
research and thousands of empirical studies about organization effectiveness
and organization change already exist. This section reviews research
on two questions central to capacity building.
- What management practices are common to high-performing
organizations?
- How do organizations improve performance?
It is important for grantmakers, nonprofit leaders, and consultants to
understand the strengths and limitations of different types of research so
that they can interpret and make use of findings. Thus it may be useful to
first discuss what organization research is and how it is carried out.
Organization Research: A Brief Primer
Theories about organizations are tested using either quantitative or qualitative
techniques. Quantitative research evaluates data from a large sample of
organizations, focuses on a limited number of organization characteristics,
and tests whether a particular characteristic (such as the extent of centralization)
is related to other factors. Quantitative research is particularly
useful for testing which characteristics are associated with organization
performance. A strong sample should include both high-performing and
low-performing organizations, based on an independent measure of performance
or multiple measures of performance. Many management practices
are not found to be "significant" predictors of organization success
because they are present in both high- and low-performing organizations.
For example, if both high- and low-performing organizations have strategic
plans, then plans per se will not be a distinguishing characteristic of
high performance.
Researchers are careful not to overstate findings because, even if statistically
significant, associations between organizational practices and
performance are usually far from perfect. Organizations that exhibit
important characteristics do not always demonstrate high performance,
and some that deviate greatly from these characteristics attain high levels
of performance. While empirical studies provide support that some practices
are likely to produce better results, they also remind us that there are
many paths to successful performance.
In applying findings from empirical research, practitioners should also
be aware that just because a particular practice is associated with higher
performance does not mean that it caused the improvement. For example,
it is often unclear whether organizations with strong balance
sheets spend more on staff training because they can afford to do so, or
whether spending additional money on training brings about higher performance.
There are relatively few empirical studies that can test for causality.
Only studies that include longitudinal data—a snapshot of
organization capabilities taken at more than one point in time—can test
for a causal relationship.
Most quantitative studies show an association but do not explain how
or why a particular factor impacts performance. To be useful to managers,
findings should be explained by a well-developed theory or by
qualitative, field-based research. Single case studies are most common, but
often generate few insights. A study that compares several organizations is
better able to highlight factors that can be overlooked with a single case.
Would the actions taken by managers be equally effective if the organization
had different leadership, strategy, or culture? Would an untried tactic
have been even more effective?
The combination of large sample empirical studies and rigorously
designed comparative case studies, of both high- and low-performing
organizations, provides a solid foundation to draw conclusions about
what works. While well-designed empirical research reveals important
insights about high performance, less rigorous research is all too common
and often misleading. As a result, Karl Weick, a leading organization scholar,
observes that "learning is superstitious and misleading, and what appears
to be knowledge creation in fact becomes the enlargement of ignorance."1
For example, some studies describe the management practices of
high-performing organizations based only on a sample of high performers.
Many popular books on management are based on such "research." The
most famous example is In Search of Excellence, a best seller in the
mid-1980s written by two very experienced management consultants.
They identified a number of factors that were common across a sample of
excellent organizations, and held them up as "best practices" to be
imitated. The excellent companies, however, did not fare very well after the
publication of the book. A follow-up study by academics showed that
this sample of firms in fact performed less well than the average firm in
the Standard & Poor's 500.
Hundreds of empirical studies have examined the relationship between
organization characteristics and performance. Characteristics commonly
used in research include: Environment; Mission, Goals, and Strategy;
Strategic Planning; Decision Process and Communication; Culture; Control
Systems; Structure; Leadership; Human Resource Management; Performance
Measurement; and Board of Directors. Research
tries to capture important dimensions of each characteristic, such as
whether the environment is stable or turbulent; structure is centralized or
decentralized; and reward systems are tied to performance. From this
body of research, a few central conclusions are important for capacity
building.
Internal consistency is important. A common finding is that consistency
between components of an organization leads to higher performance.
For example, a decentralized structure within which frontline staff make
decisions is more effective if supported by recruitment and training. Similarly,
a fundraising strategy that calls for expanding the pool of individual
donors will depend on how actively the board is involved in fundraising.
Many studies show that the consistency, or fit, between components is more
important than the choice of any particular component.
The implication for capacity building is that change should be holistic,
rather than piecemeal, as any major issue facing the organization may
require changes in a number of areas. For example, to improve the financial
management of a nonprofit requires more than a new computer system
and staff training on how to use it. The senior staff and board should
incorporate financial information into their decision process, and leaders
must be willing to make difficult decisions based on better financial information.
Financial management will not improve unless behaviors and
attitudes support the use of new systems.
Culture is central to performance. A consistent finding in organization
research is the importance of organization culture. "All areas of the
literature: theoretical, anecdotal and empirical suggest that organizational
culture is central in determining organizational outcomes, including
performance."2 Organization culture refers to commonly held values,
beliefs, and attitudes that shape the behavior of organization members,
and is often overlooked by nonprofit managers as an explanation for high
performance. There is a strong consensus among researchers that the following
practices, behaviors, and attitudes are important to the effectiveness
of any organization:3
Performance matters. High performance begins with the belief
that nonprofits should be effective at producing outcomes and
efficient in the use of resources, and that improvement efforts are
worthwhile. Capacity building work often runs into trouble when
staff are not enthusiastic about improving performance.
Management by fact. Staff believe that it is important to set
goals, collect data, and track progress toward goals. Problem
solving is based on valid data.
Open discussions. Staff are comfortable identifying problems and
raising issues, even when they might prove embarrassing to the
organization, particularly its leaders. Honesty and candor are
highly valued. Conflict is managed, not suppressed or avoided.
Efforts are made to uncover disconfirming data and contrary
opinions. The use of power has a strong impact on openness.
The autocratic use of power can create risk-averse behaviors and
ineffective problem solving.
Problem solving. Effective problem solving has several steps:
identify problems; analyze root causes; develop solutions to
address the most important causes; implement solutions; and
evaluate progress. It relies heavily on the use of data to identify
problems and evaluate solutions. Nonprofits that do not collect
data engage in problem solving based on intuition, and are often
less effective as a result.
Learning. Learning is more proactive than problem solving, as the
organization does not wait to be confronted with a problem.
Organization learning requires an investment of time to evaluate
failures and even to question how successful efforts could be
improved. Groups that value learning set aside time to reflect on
how well they are performing.
Demonstrate effectiveness. Not all nonprofit managers consider it
important to demonstrate the effectiveness of their work; some
believe it is enough to undertake enormously difficult and
challenging work, such as fighting the causes of poverty and
racism. They do "good work," and, in this view, should not be held
accountable for proving it. Managers who believe it is important
may well be more effective because the process of asking
questions about effectiveness leads to learning and improvement.
Focus on the future. Efforts to improve financial stability and
organization stability are based on the belief that it is important
to have sustainable organizations. While all managers face time
constraints, future-oriented managers choose to take time away from the
demands of daily operations to focus on the future.
The choice of how to allocate current funds pits the desire to
help people today against the desire to have a sustainable program
or organization. An organization’s financial situation provides
important evidence of management’s future orientation. Managers
focused only on the present may find it acceptable to devote every
available penny to delivering the current program, without putting
funds aside for working capital, building maintenance, or emergency
reserves.
Willingness to make tough decisions. To improve performance,
nonprofit leaders may be faced with unpleasant decisions, such as
reducing or closing a program, providing candid feedback to staff
on job performance, or even terminating an under-performing
employee. Some leaders simply do not believe that these steps
should be necessary in a nonprofit organization.
There is no simple recipe for management success. Much has been
learned about the role of structure, decision processes, and other characteristics
—how they relate to environmental conditions, how they relate
to each other, and their effect on performance. The most important conclusion
about which factors are associated with high performance is that
it all depends. Studies frequently show that the impact of a particular
organization component is contingent on some aspect of the organization’s
context—the environment, strategy, size of the organization, or
type of work. For example, the most effective organization structure
depends on the rate of change in the environment or the size of the organization,
and may differ by industry or sector. No particular strategy,
structure, or control system is correlated with high performance in every
situation, across all studies, which means there are no hard and fast rules.
This body of research suggests that designing and managing organizations
is a complex undertaking.
While research has generated a great deal of knowledge and insights
about individual components of organizations and how they relate to
performance, it is difficult to summarize succinctly and is beyond the scope
of this discussion. To learn more about organization design, consultants and
nonprofit managers can consult review articles summarizing research on
a particular topic,4 or refer to a general organization theory textbook that
reviews research in the field.5
There are multiple patterns of effective management. Another general
conclusion from research is that there is no single best way to manage,
even for similar organizations. While research identifies management
practices that are ineffective in a given situation, there is often more than
one pattern of effective management.
One reason that multiple patterns of effectiveness are possible is
that organizations present managers with paradoxes. As managers
take actions to improve effectiveness—such as introducing more rules and
procedures to achieve greater coordination—they solve one organizational
issue while creating new ones. For example, an increase in rules may well
lead to better coordination, but can also lead to lower levels of innovation
and adaptability. An alternative is to use frequent face-to-face meetings,
strong cultural norms, and a common understanding of goals, but
such practices require significant time of leaders and staff. Rules and procedures
may well be more expedient.
Bob Quinn describes two important dimensions of organizations that
present competing demands on managers: internal versus external focus;
flexibility versus control. Using these two dimensions, Quinn describes
the competing values framework, which highlights four different views of
organizational effectiveness. These views seem to conflict:
We want our organizations to be adaptable and flexible, but we
also want them to be stable and controlled. We want growth,
resource acquisition, and external support, but we also want
tight information management and formal communication. We
want an emphasis on the value of human resources, but we also
want an emphasis on planning and goal setting. The model does
not suggest that these oppositions cannot mutually exist in a
real system. It suggests, rather, that these criteria, values, and
assumptions are oppositions in our minds. We tend to think
that they are very different from one another, and we sometimes
assume them to be mutually exclusive.6
An organization that neglects any one of these areas is unlikely to
succeed over the longer run. Empirical research using this model shows
many patterns of effective organizations, each with different strengths.
While each dimension is helpful to performance, too much of any dimension
can lead to "ineffective" patterns. Rather than thinking of organization
design choices as "solutions" or "best practices," this research emphasizes
that managers have to resolve competing pressures in order to achieve high performance.
In a comprehensive review of management research, Andrew Pettigrew
reaches a similar conclusion:
Overall, a general message emerging from the literature in this
chapter is that there are no universal solutions; no ‘magic bullets’.
The way forward lies in customization. This is manifest in
the move away from a search for a universal blueprint for leadership,
a retreat from the idea of ‘optimal’ organizational
structures for delivering high performance, a de-emphasis of
‘best-practice’ in organizations in favour of good alignment
and ‘fit’ between different configurational parts of the organization
(and indeed, between the organization and its environment).7
Implications for Nonprofit Management
As yet, relatively few empirical studies have focused on nonprofits. A
number of studies have examined one aspect of management—such as
the board of directors or strategic planning—and have drawn conclusions
about effective practices. Much more work is needed to identify
patterns of effective nonprofit management that will offer managers
insights for organizations of different size, stage of development, and
type of services. At this point, the best guidance for managers comes from
the small body of research on nonprofits, and the much larger body of
work on organizations in general.
Even without strong empirical evidence, nonprofit managers are
offered advice—in books, articles, assessment tools, and from consultants
—about steps to improve performance. Much of the advice focuses
on formal planning processes, formal procedures, and documentation as
the keys to better management. Organization research suggests that organization
culture may be far more important to effective performance than
this advice would suggest. Two separate research studies have found that
some of the conventional wisdom is, in fact, not helpful to improving performance.
Thomas Holland has conducted considerable research on nonprofit
boards, and describes the origin of advice on managing boards:
On closer inspection, however, it is apparent that most of this
literature is based almost entirely upon individual experience
and opinion, tends to be exhortative rather than empirical, is
more anecdotal than systematic, and provides a limited basis
for understanding the problems or improving the practices of
governance. The advice of a few observers of boards tends to
stress idealized, even romanticized, versions of what boards
should be (for example, Carver, 1990), while others include
important details about budgets, planning, or other functions,
but offer little help on how the board can assess its performance
or take purposive action to become more effective as a group.8
Holland developed an instrument for assessing board effectiveness and
tested whether the questions were useful in discriminating between boards
with high and low performance. After considerable effort and revisions, he
developed a set of questions that are associated with performance. Other
than Holland’s Board Self-Assessment Questionnaire (BSAQ) instrument
for assessing board effectiveness, few self-assessment tools have been validated
in this manner.9 With other assessment tools, managers should
note that there is often no empirical evidence that the factors selected
(which are often very prescriptive) will lead to higher performance.
Another study demonstrated that for a cross section of health and welfare
organizations, the conventional wisdom about "correct practices" does not
relate to performance. A group of nonprofit practitioners (executives,
technical assistance providers, and funders) were asked to identify the criteria
they actually use in evaluating the performance of nonprofit organizations.10
The initial list was reduced to eleven "objective" indicators of
effectiveness, which reflect the general view about good management
practices in the field.
- Mission statement
- Use of form or instrument to measure client satisfaction
- Planning document
- List or calendar of board development activities
- Description of or form used in CEO performance appraisal
- Description of or form used in other employees’ performance appraisals
- Report on most recent needs assessment
- By-laws containing a statement of purpose
- Independent financial audit
- Statement of organizational effectiveness criteria, goals, or objectives
- Board manual
In a sample of 64 nonprofits, the study found that "correct procedures"
are not related to organization effectiveness. It seems that indicators
commonly considered important by nonprofit practitioners are not
robust. If there are factors that predict organization effectiveness, they
are missing from the list.
Culture may help to explain why functional capabilities viewed by
many as important are not linked to performance. Holland’s research
reached the same conclusion, as he found that board practices and behaviors
were key to understanding board effectiveness, rather than formal
procedures and systems. Strategic planning provides a useful example of
the importance of practices and behaviors. Research has found that
whether a nonprofit has engaged in strategic planning is often not correlated
with performance.11 A possible explanation is that organizations
develop plans that are not of high quality, or are never implemented.
Rather than simply encouraging nonprofit managers to undertake strategic
planning, it might be more useful to focus on the factors that make a
strategic planning process effective: Did the organization candidly assess
its own program outcomes? Do organization managers and staff ask
whether the programs are effective and how they might be improved? Do
managers and staff learn from research on program impact and model
programs that have demonstrated long-term impact? Are organization
leaders actively assessing external trends related to the program? Is feedback
from clients solicited?
Similarly, the key to useful strategies and adaptability is not the yearly
(or twice a decade) planning process, but whether the organization thinks
and acts strategically on a daily basis. Do staff seek out model programs?
Collect and use data about program impact? Constantly question what
factors are critical to impact and how to increase those factors? Question
whether limited resources are being used most productively? Such "strategic
behaviors" may be better indicators of effectiveness than simply asking
about "management artifacts" such as strategic plans.
Improving Organization Performance
Knowledge about effective management practices is not easy to put into
practice. To improve performance, organization leaders begin with an
image of how the organization should function—which structures, decision
processes, systems, or cultural norms will make the organization more
effective. But even with a clear idea of what to change, leaders often find
that building new capabilities is a daunting challenge.
Researchers have long recognized that it is easier to describe a high-performing
organization than to create one. After extensive research on
effective organization structures, Nohria and Ghoshal caution managers
that putting these important findings into practice promises to be difficult:
None of the foregoing analysis should be interpreted to imply
that adopting the optimal organization form for a company is
a simple or seamless process. . . . Managers must possess a
profound understanding of the business environment in
which they are operating to decide which organization form
is most appropriate for addressing the challenges of the particular
environment. Even if a manager successfully identifies
the ideal type of structure the firm needs, achieving the institutional
change necessary to implement it presents an additional
obstacle. Selecting the appropriate structure is not an
easy task; learning to manage it may be just as difficult.12
Research on organization change has generated important insights for
both managers and consultants. Five conclusions are particularly relevant
to the work of building the capacity of nonprofits: 1) underlying
issues must be recognized; 2) the difficulty of change needs to be understood;
3) client readiness should be evaluated; 4) the change process
should be managed; and 5) active leadership is crucial.
1. Underlying Issues Must Be Recognized
When nonprofit leaders seek help, the issue that they identify to the consultant
(the presenting issue) is often a symptom of deeper, underlying
issues. Projects that fail to address important underlying issues either
never get off the ground, or don’t improve performance.
Common presenting issues include fundraising, strategic planning,
information systems, personnel policies, or supervisor training. Yet consultants
often discover other problems—the mission is unclear, the board
is not engaged, leaders avoid discussing external threats or assessing
program impact. Some issues become obvious to the consultant, but are
embarrassing and threatening to nonprofit leaders. Leaders may not
want to bring up political conflicts, ineffective management skills or
style, or poor interpersonal relationships among the leaders. Even if
raised by the consultant, leaders may be unwilling to address such issues.
Improvements often fail because of underlying political and cultural
conflicts. Organization leaders and consultants can be more effective if
they anticipate political and cultural issues and learn how to manage them.
Political conflicts. If a power struggle is already present, an improvement
project can easily become a lightning rod for the conflict. In other
situations, the improvement effort itself triggers political conflicts. In
either case, political issues need to be resolved before improvement work can proceed.
Political conflict can transform a "rational" intervention like strategic
planning into a useless exercise. One nonprofit engaged in a lengthy (and
expensive) strategic planning process that did not resolve conflicting
views or lead to program changes. The disagreement between two senior
leaders was less about strategic direction than who had greater power
and influence. Power struggles cannot be addressed by planning, hence
this strategic planning exercise was doomed before it got started.
External power relationships also impact performance. Many nonprofit
organizations are at the center of a diverse community of grassroots volunteers,
clients, staff, board members, and grantmakers who bring to bear
different skills, interests, and perspectives on issues of common interest.
Not only are there differences of opinion on important issues, but many of
these diverse groups bring differing types of power to the relationship.
Volunteers can withdraw their labor if they don’t like the nonprofit’s
direction or policies; supporters can withhold contributions. While political
conflicts occur in all organizations, the diffused nature of power may
make nonprofit actors less willing to defer to central authority and more
willing to assert their positions. Any disaffected group—community members,
advocacy groups, and even staff—can appeal to board members, grantmakers,
political figures, and even the press to influence decisions.
Political resistance can, and often does, thwart changes intended to
improve performance. At a private boarding school, for example, a headmaster
was named to improve the academic standing of the school. When
he tried to hire new teachers in the weakest department and force the
early retirement of several longtime faculty, there was an outpouring of
resistance from other faculty, parents, and alumni directed at members
of the board of trustees. An important cultural value of the school—personal
relationships and loyalty—was at the heart of the resistance, and
turned out to be more important than academic excellence. In trying to
change the culture of the school, the headmaster had not counted on the
power of the faculty and alumni to stop specific changes. Eventually, he abandoned the effort.
An improvement project can result in a shift of power that enhances
the organization’s effectiveness, but the process is rarely smooth. In one
national nonprofit, leaders of a local chapter refused to implement a rather
benign technical improvement to the accounting system. At the heart of
the dispute was an effort by field units to increase their influence over headquarters
strategy. The organization reassessed the respective roles and
authority of the headquarters and field units and decided on a new relationship
that provided adequate central coordination while encouraging local initiative.
Cultural change. Improvement programs often call for the introduction
of new practices and behaviors throughout the organization. New practices
may conflict with existing values, beliefs and attitudes in the organization,
resulting in anything from lukewarm enthusiasm to outright
hostility. Managers and staff may be uncomfortable comparing their program
to others and questioning how it could be better; setting measurable
goals, tracking progress, and using data to make decisions; or thinking
about the long-term implications of decisions and developing multiyear plans.
Research on large-scale change programs undertaken in the private
sector, such as total quality management and reengineering, not only document
that such programs often fail, but cite culture as a major reason. A
survey conducted by CSC Index, the consulting firm that introduced the
reengineering concept, found that 69 percent of the firms surveyed in the
U.S. and 75 percent of the European firms had engaged in at least one
reengineering project.13 Yet, 85 percent of those firms found little or no
gain from the change program. The authors concluded that failed programs
were treated as a technique or change program, while successful
programs integrated reengineering with an overall program that
addressed the organization’s direction, values, and culture.
Other studies confirm that long-term change and performance
improvement depend on cultural change. According to organization
scholars Cameron and Quinn, "Although tools and techniques may be
present and the change strategy implemented with vigor, many efforts to
improve organization performance fail because the fundamental culture
of the organization remains the same; i.e., the values, the ways of thinking,
the managerial styles, the paradigms and approaches to problem solving."14
They go on to explain why culture is so critical:
This dependence of organizational improvement on culture
change is due to the fact that when the values, orientations,
definitions, and goals stay constant—even when procedures
and strategies are altered—organizations return quickly to the
status quo. . . . Without an alteration of the fundamental
goals, values, and expectations of organizations or individuals,
change remains superficial and short term in duration.
An organization’s existing culture is often implicit, and the cause of
resistance may be unclear to all concerned. When faced with uncomfortable
change, members will identify a number of "rational" objections
that hide what is largely an emotional response. By facing and working
through these conflicts, staff can often figure out how to remain true to
their values while increasing performance. By ignoring cultural resistance,
leaders find that planned improvements are never fully implemented,
or don’t lead to improved performance.
Capacity building is based on the notion that effectiveness, efficiency,
and performance improvement are important goals. In contrast, some
board members, staff, and volunteers view planning, priorities, and the
collection of data as "business practices" that are not to be trusted.15
Conflict often arises when leaders decide to invest in programs that have
the greatest impact and eliminate less effective programs. Few issues are
more gut wrenching for nonprofit staff than a decision to stop programs
that benefit some or perhaps all of their clients. Less effective programs
may be the only ones available to a "difficult to serve" population. One
women’s shelter developed a transition program to provide job skills and
education to women likely to reenter the workforce. Eligibility criteria
were based on a careful study of successful participants. Some longtime
members of the staff were deeply troubled that some of the neediest
women thereby became ineligible, and did not agree with the new focus
on effectiveness. As a result, several staff left the program.
2. The Difficulty of Change Needs to Be Understood
The overwhelming conclusion from studies of organization change is that
most planned change efforts fail. More worrisome is the finding that
interventions can be successful in the short term, but not produce lasting
change. Thus, successful projects and satisfied consulting clients are not
good indicators of capacity building success. Fortunately, not all projects
are difficult, and the success rate can be quite high for simple, technical projects.
The following issues indicate a higher degree of difficulty:
Power. Involve a shift of priorities and resources; change in
responsibilities and reporting.
Management style. Require a change in the way key managers
(the founder, board chair, or executive director) perform their jobs.
Behaviors, practices. Require a change in the way groups of
managers and staff perform their jobs; require new behaviors
and/or new skills.
Culture. Challenge widely shared beliefs and assumptions.
Values. Changes are based on values that appear to conflict with
other organizational values.
Difficult conditions require a more sophisticated understanding of the
change process by both consultants and managers. An important question
for capacity builders is whether a particular project is difficult and
calls for a consultant with more sophisticated skills. Experienced consultants
report, however, that cultural and political issues can be lurking
below the surface even in seemingly simple technical projects.
3. Client Readiness Should Be Evaluated
A motivated client can tackle even the most difficult change conditions
and succeed. Equally important, when the client lacks internal motivation,
there are few techniques or interventions an outsider can use to help.
According to Marvin Weisbord, a leading scholar and consultant, "I also
believe we can consult only under relatively narrow circumstances: where
a client leader is willing to stick his or her neck out, where there is a pressing
organizational dilemma, where some people are already searching for
a way out."16 Weisbord goes on to use the model of a "four-room apartment"
developed by Swedish social psychologist Claes Janssen that
describes how individuals deal with change.
In Contentment we like things the way they are. If somebody
‘helps’ us we may start with good-natured acceptance and
soon turn our backs on the helper if pushed to do something
new. We may even be thrown by the helper into denying that
the offer of help is a problem. In Denial we repress feelings
of anger, fear, anxiety brought on by change, pretending everything’s
okay. If we become aware of and own our feelings, then
we move into Confusion. In that room we admit openly that
we don’t know what to do, are worried, upset, unsure. We are
helpable. In Renewal we become aware of more opportunities
than we can actualize. Working through that (good) dilemma
puts us back in Contentment.
Janssen’s concept struck me as a useful diagnostic tool, the
simple way of assessing ‘readiness.’ We can’t consult to people
in Contentment or Denial. We should not even try. The best we
can do is validate people’s right to be there. The room hospitable
to flip charts, models, and rational problem solving is Confusion.
And we might be helpful in Renewal if we’re fast
enough with new ideas and can keep up with the clients.17
Readiness is a dynamic concept that begins with a willingness to seek
help, engage in a joint diagnosis, and learn about underlying issues; work
on issues; and persist in the face of difficult challenges. If changes are not
difficult, then clients are more likely to persevere and be successful. If,
however, difficult underlying issues are present, then the client’s readiness
to tackle difficult issues is critical.
In assessing readiness it is important to take into account whether the
client is informed and realistic about the challenges facing the organization.
Many clients are motivated to improve fundraising, expand programs
and services, or improve program impact, but are not well
informed about underlying issues, or the work required to achieve their
goals. They may be in "confusion" about how to raise more money, but
in "denial" about the board’s role in fundraising. With a realistic sense of
the challenges ahead, the question turns to whether organization leaders
are willing to devote sufficient time and are prepared to deal with barriers
to change. Painful choices may be required. Despite a strong desire to
increase fundraising, leaders may be reluctant to recruit new board members,
shift power from the board to staff, or challenge the founder’s autocratic
management style. Clients may be motivated to seek help, but not
enough to do what it takes to improve performance.
A common view among capacity builders is that comprehensive organization
assessment is an effective tool to promote readiness and improvement.
Some capacity builders rely on formal assessments to stimulate clients to
tackle areas where they lack "correct practices." Using formal tools, consultants
conduct a thorough analysis of the organization’s issues, explain
their findings to the client, and offer advice on how to address the issues
that emerge. In contrast, organization researchers and many experienced
consultants are skeptical about the value of presenting an outsider’s diagnosis
of organization issues. Instead, they believe that a diagnosis does
not deal with the more important question of motivation. Weisbord
concludes, "There is no direct connection between the accuracy of a diagnosis
and people’s willingness to act on it."18
4. The Change Process Should Be Managed
Research shows that improvement projects undertaken in similar organizations,
for similar reasons, can produce very different outcomes. The
difference is the way that leaders manage the change process. Three specific
actions can lead to more successful change initiatives: building support,
monitoring progress, and learning.
Build Support. If change has a direct effect on large numbers of managers
and staff, then it is important to take steps to build support from the
outset and to reinforce that support as changes unfold. When improvement
is first discussed, it is important for leaders to construct a compelling
case for why the change is necessary and how performance will be
affected. Building support requires much more than announcing changes
and answering a few questions. In some cases it is useful for leaders to
engage the organization in developing a future vision that spells out not
only long-term goals but specific organization capabilities that will be
required to achieve them.
Initial enthusiasm can wane, however, as change becomes personal.
Resistance can develop when individuals are asked to report to a new
boss or change the way they perform their job. Leaders may need to negotiate
with key players, using a combination of logic, persuasion, incentives,
and even the threat of sanctions.
Monitor Progress. Research reveals that the greatest challenges lie in
implementation, rather than in planning change. Scholars agree that
while planning provides an opportunity to build consensus, discuss and
anticipate many of the challenges ahead, organization change rarely
unfolds according to plan. They conclude that constant monitoring and
adjustment are essential to success. Rather than following an elaborate
blueprint for change, leaders define a few concrete steps to get the process
moving and engage in regular assessment of progress toward goals.
Many changes are inherently difficult and take time. Building new
capabilities often requires changes to everyday management practices,
group behaviors, and personal management style. An executive director
who has been effective at managing fifteen direct reports faces a new
challenge when another layer of management is added and he/she now
has to learn to manage through five senior staff. While some managers
find the transition easy, others do not. Similarly, while it is easy to
acknowledge that the board needs to function more effectively and play a
greater role in fundraising, it may take several years to reshape the board.
Developing new leadership, recruiting new board members, rotating
longtime members off the board, and introducing new practices to make
meetings more effective all take time and require personal learning and
change from each and every board member.
Emphasize Learning. Learning new behaviors and practices requires
the active involvement of organization leaders. The key is to make the
connection between new behaviors and improved performance, and to
point out old behaviors when they occur. Consistent feedback from managers
and peers is useful because individuals do not always notice when
they revert to old behaviors. One practice is allowing time for reflection
so that staff can discuss whether practices are being used effectively, and
whether they are working.
Researchers promote "action learning" as a powerful technique not
only to change individual behaviors but to develop lasting organization
capabilities and improve performance. Action learning is based on the
notion that practices and behaviors are best learned when applied to
solve real problems. New practices may seem straightforward in a training
session, but challenges and complexities become apparent when they
are applied to real issues. At first, participants are often unsuccessful and
need coaching from managers or consultants to make adjustments. When
new practices are effective, participants make a strong connection between
practices and results, which eventually changes their underlying beliefs as
well. Leaders can create opportunities for action learning by asking staff
to identify an important issue and set goals for improvement.
Increasingly researchers find that learning is critical to organization
success. "The process of change relies on the development and utilization
of less visible organizational capabilities (particularly those concerning
learning and change) called intangible assets. [They] . . . believe that an
organization’s ability to learn and change is the most fundamental of its intangible assets."19
5. Active Leadership Is Crucial
Research shows that active leadership of the change process is crucial to
success. Leaders play a crucial role in building a case for change, diagnosing
underlying issues, and anticipating and handling resistance. Only
leaders have the power to negotiate political conflicts. Cultural change
will not occur without consistent support from the top. Rewards, promotions,
and hiring reinforce the new culture, and are only available to those
at the top. A study of nonprofit change found that change is more successful
when leaders are actively involved.20
Internal leadership is crucial to change, yet nonprofit managers may
have little training or experience relevant to the challenge. Outside assistance—through
leadership development workshops, coaching, or on-site
assistance—can be helpful to coach leaders on managing change and
improve the odds of success.
The Use of Consulting to Improve Performance
Because consultants have played a major role in capacity building to date,
it is useful to examine the record of consulting as a vehicle for producing
lasting change. The federal government has provided extensive technical
assistance as part of major policy reforms in agriculture, community
development, youth development, and education, hoping to change
local practice by providing expert knowledge to the front lines. Some
government initiatives were subjected to extensive research on long-term
impact. The history of government-sponsored technical assistance is particularly
relevant to capacity building programs, in which a three-party
relationship develops between local providers, consultants, and funders.
A recent review of more than fifty years of history with technical assistance
strategies reveals that technical assistance often fails to achieve
sought-for change. Rand conducted one of the most thorough studies of
educational innovation, examining data on 293 local projects funded by
four federal education programs.
The study found that while the federal programs stimulated
local education agencies to undertake innovative projects,
that participation did not insure successful implementation
and successful implementation did not insure continuation of
the project over time. . . . The Rand study deemed most of the
technical assistance strategies ineffective, especially those that
did not respond to the needs and motivations of teachers or
the basic conditions of school districts. . .21
The Rand report notes, "Outside experts were typically
ignored because their advice was too abstract, or their awareness of local
problems was inadequate. . . . In short, federally supported assistance
efforts often were ineffective because they did not deal in an adaptive way
with the concrete problems facing local staff."22
While the government can insist on evaluations of technical assistance
programs, there is no such pressure on private sector consulting. A small
band of scholars and independent consultants has raised fundamental
questions about the impact of traditional consulting. Yet private sector
consulting is a profitable and growing industry with over $50 billion in
revenues. Consultant Jack Phillips describes an all-too-common consulting
engagement that fails to bring about any change:
When the senior staff of the firm objected to the consultants’
report, the CEO, who had hired the consultants previously,
praised the work of the consultants and suggested their recommendations
be adopted. The staff resisted in every way and ultimately
did nothing with what was originally planned. The
recommendations were never implemented by the senior staff.
In a reference check by another organization seeking consulting
advice, the CEO praised the report and gave the consulting firm
very high marks for its efforts. Privately, he said, "Although we
did not implement all the recommendations and some were
already in planning, it was a good exercise for the organization."23
Robert Schaffer, the author of High Impact Consulting, agrees that
much consulting is ineffective and offers one explanation:
Throughout our lives, we are trained to depend on the
experts to give us the answers. . . . Conventional consulting
methodologies reinforce this perception by putting consultants
in the lofty role of diagnosticians and solution providers. This
mystical faith in what the consultant’s magic potions can
accomplish often motivates otherwise hardheaded business
executives to spend huge sums and considerable time and
energy on consulting projects that have no demonstrable connection
to bottom-line achievements.24
Evidence from both the private and public sectors raises questions
about consulting approaches for nonprofits. At the very least, nonprofit
consultants and grantmakers should be very careful before adopting
practices from private sector consulting or large-scale technical assistance
programs. It is clearly not safe to assume that if techniques or practices
are commonly used, they must be effective.
Notes for "The Need for Capacity Building"
1 Salamon, Lester M., America’s Nonprofit Sector: A Primer, 2nd Edition
(New York: The Foundation Center, 1999). P. 173.
2 Foundation Center, Foundation Giving Trends: Update on Funding
Priorities, 2003 edition.
3 Correspondence with Barbara Kibbe, December 17, 2002.
4 David and Lucile Packard Foundation, Organizational Effectiveness and
Philanthropy Program Guidelines.
Notes for "Research on Nonprofit Effectiveness and Improvement"
1 Weick, Karl E., "Drop Your Tools: An Allegory for Organizational
Studies," Administrative Science Quarterly, Vol. 41, June 1996, pp.
301–313. P. 309.
2 Pettigrew, Andrew, T.J.S. Brignall, Janet Harvey, and David Webb, The
Determinants of Organizational Performance: A Review of the
Literature (Coventry: Warwick Business School, March 1999). P. 50.
3 Baldrige National Quality Program, Criteria for Performance Excellence.
4 Forbes, Daniel, "Measuring the Unmeasurable: Empirical Studies of
Nonprofit Organization Effectiveness from 1977–1997," Nonprofit
& Voluntary Sector Quarterly, Vol. 27, No. 2, June 1998, pp.
183–202; Stone, Melissa, Barbara Bigelow, and William Crittenden,
"Research on Strategic Management in Nonprofit Organizations:
Synthesis, Analysis and Future Directions," Administration & Society,
Vol. 31, No. 3, July 1999, pp. 378–425; Pettigrew, Andrew, T.J.S.
Brignall, Janet Harvey, and David Webb, The Determinants of
Organizational Performance: A Review of the Literature (Coventry:
Warwick Business School, March 1999).
5 Daft, Richard L., Organization Theory and Design, 7th edition
(Mason: South-Western College Publishing, 2001); Bolman, Lee G.
and Terrence E. Deal, Reframing Organizations: Artistry, Choice, and
Leadership (San Francisco: Jossey-Bass).
6 Quinn, Robert E., Beyond Rational Management: Mastering the
Paradoxes and Competing Demands of High Performance (San
Francisco: Jossey-Bass Publishing, 1988). P. 49.
7 Pettigrew, Andrew, T.J.S. Brignall, Janet Harvey, and David Webb, The
Determinants of Organizational Performance: A Review of the
Literature (Coventry: Warwick Business School, March 1999). P.118.
8 Holland, Thomas, "Self-Assessment by Nonprofit Boards," Nonprofit
Management & Leadership, Vol. 2, No. 1, Fall 1991, pp. 25–35. P. 26.
9 Jackson, Douglas and Thomas Holland, "Measuring the Effectiveness
of Nonprofit Boards," Nonprofit and Voluntary Sector Quarterly,
Vol. 27, No. 2, June 1998, pp. 159–182.
10 Herman, Robert D. and David O. Renz, "Nonprofit Organizational
Effectiveness: Contrasts Between Especially Effective and Less
Effective Organizations," Nonprofit Management & Leadership,
Vol. 9, No. 1, Fall 1998, pp. 23–38.
11 Stone, Melissa, Barbara Bigelow, and William Crittenden, "Research
on Strategic Management in Nonprofit Organizations: Synthesis,
Analysis and Future Directions," Administration & Society, Vol. 31,
No. 3, July 1999, pp. 378–425.
12 Nohria, N. and S. Ghoshal, The Differentiated Network: Organising
Multinational Organisations for Value Creation (San Francisco:
Jossey-Bass Publishing, 1997). P. 190.
13 CSC Index, State of Reengineering Report, North America (1994).
14 Cameron, Kim S. and Robert E. Quinn, Diagnosing and Changing
Organizational Culture: Based on the Competing Values Framework
(Addison Wesley Series on Organization Development, 1999). Pp. 9–10.
15 "Technical Assistance & Progressive Organizations for Social Change in
Communities of Color," A report to the Saguaro Grantmaking Board
(New York: The Funding Exchange, 1999).
16 Weisbord, Marvin R., "Towards a New Practice Theory of OD: Notes
on Snapshooting and Moviemaking," Research in Organizational
Change and Development, Vol. 12, 1988, pp. 59–96. P. 63.
17 Weisbord, Marvin R., "Towards a New Practice Theory of OD: Notes
on Snapshooting and Moviemaking," Research in Organizational
Change and Development, Vol. 12, 1988, pp. 59–96. P. 70.
18 Weisbord, Marvin R., "Towards a New Practice Theory of OD: Notes
on Snapshooting and Moviemaking," Research in Organizational
Change and Development, Vol. 12, 1988, pp. 59–96. P. 66.
19 Pettigrew, Andrew, T.J.S. Brignall, Janet Harvey, and David Webb, The
Determinants of Organizational Performance: A Review of the
Literature (Coventry: Warwick Business School, March 1999). P. 71.
20 Nutt, P.C., "Selecting Tactics to Implement Strategic Plans,"
Strategic Management Journal, Vol. 10, 1989, pp.145–161.
21 Wahl, E., M. Cahill, and N. Fruchter, Building Capacity: A Review of
Technical Assistance Strategies (New York: Institute for Education
and Social Policy, New York University, 1998). P. 14.
22 Berman, Paul and Milbrey Wallin McLaughlin, Federal Programs
Supporting Educational Change: Volume 1: A Model of Educational
Change (Rand Corporation. Prepared for the U.S. Office of
Education, Department of Health, Education and Welfare
R-1589/1-HEW, September 1974). P. 38.
23 Phillips, Jack, The Consultant’s Scorecard: Tracking Results
and Bottom-Line Impact of Consulting Projects (New York:
McGraw-Hill, 1999). P. 4.
24 Schaffer, Robert H., High Impact Consulting: How Clients and
Consultants Can Leverage Rapid Results into Long-Term Gains (San
Francisco: Jossey-Bass Publishers, 1997). P. 133.