
Recommendations for CDFI Performance Measurement: Improving measures, increasing knowledge, building capacity

Marcus Lam, Loh-Sze Leung, Molly Scott

March 22, 2002


TABLE OF CONTENTS

Executive Summary _________________________________________________ I
   Recommendations ________________________________________________ I
Introduction _______________________________________________________ 1
   The CDFI Fund __________________________________________________ 2
What are the Problems with the Fund's Performance Measurement? _____ 3
   Consequences of inadequate information __________________________ 4
   1998 GAO Report on Fund Performance Measurement _________________ 6
Policy Tensions—Why is it so difficult to measure performance? _____ 8
   Financial Covenants: Standardized versus Tailored _______________ 8
   Performance Goals and Measures: Measurability versus Meaning ____ 9
   Performance Goals and Measures: Meeting Current Constraints versus Building Long Term Capacity ____ 11
   Performance Measures and Financial Soundness: Social Impact versus Financial Success ____ 12
Criteria for evaluation ___________________________________________ 14
   Political Feasibility ___________________________________________ 14
   Substantive Meaning ____________________________________________ 14
   Administrative Feasibility ______________________________________ 14
Alternatives ______________________________________________________ 15
   Develop benchmarks to evaluate financial performance of CDFIs ___ 15
      Alternative One _____________________________________________ 15
      Alternative Two _____________________________________________ 18
      Alternative Three ___________________________________________ 19
   Improve performance measurement to better capture the Fund's "value-added" ____ 27
      Alternative One _____________________________________________ 27
      Alternative Two _____________________________________________ 28
      Alternative Three ___________________________________________ 30
   Modify Report Mechanisms _______________________________________ 33
      Alternative One _____________________________________________ 33
      Alternative Two _____________________________________________ 35
Comparing the Alternatives ________________________________________ 37
Recommendations ___________________________________________________ 38
   Weighting the Criteria _________________________________________ 38
   Financial Soundness Recommendation _____________________________ 38
   Performance Measurement Recommendation _________________________ 39
   Reporting Recommendation _______________________________________ 40
   Data Management Recommendation _________________________________ 40
Conclusion ________________________________________________________ 43
References ________________________________________________________ 44
Appendix A: CDFI Industry Overview and Common Data Project Background ____ 47
Appendix B: Explanation of Financial Ratios _______________________ 49
Appendix C: Important Limitations of CDP Data _____________________ 52
Appendix D: Negotiation and Use of Performance Measures ___________ 56
List of Interviews

TABLES AND SIDEBARS

Table 1.1 NET INCOME BY TYPE ______________________________________ 20
Table 1.2 NET REVENUE BY TYPE _____________________________________ 21
Table 1.3 SELF SUFFICIENCY RATIO BY TYPE __________________________ 21
Table 1.4 NET LOAN LOSS RATIO BY TYPE _____________________________ 21
Table 1.5 LOAN LOSS RESERVE RATIO BY TYPE _________________________ 22
Table 1.6 NET ASSET RATIO BY TYPE _________________________________ 22
Table 1.7 DELINQUENCY RATIO BY TYPE _______________________________ 22
Table 2.1 NET INCOME BY ASSET SIZE ________________________________ 23
Table 2.2 NET REVENUE BY ASSET SIZE _______________________________ 23
Table 2.3 SELF SUFFICIENCY RATIO BY ASSET SIZE ____________________ 23
Table 2.4 NET LOAN LOSS RATIO BY ASSET SIZE _______________________ 24
Table 2.5 LOAN LOSS RESERVE RATIO BY ASSET SIZE ___________________ 24
Table 2.6 NET ASSET RATIO BY ASSET SIZE ___________________________ 24
Table 2.7 DELINQUENCY RATIO BY ASSET SIZE _________________________ 25

Sidebar 1. LIMITATIONS OF CDP DATA ________________________________ 26
Sidebar 2. IMPLEMENTING PERFORMANCE MEASUREMENT ALTERNATIVE TWO: TURNING OUTPUT MEASURES INTO IMPACT MEASURES ____ 29
Sidebar 3. EXEMPLARY IMPACT STUDIES _______________________________ 32
Sidebar 4. EXEMPLARY DATA MANAGEMENT: NEIGHBORHOOD REINVESTMENT CORPORATION ____ 41
Sidebar 5. THE CALIFORNIA WELLNESS FOUNDATION—USING GRANTEE REPORTS IN EVALUATION ____ 42


Executive Summary

The Community Development Financial Institutions Fund (the Fund) has been instrumental in the expansion and development of the community development finance industry. Practitioners praise the Fund for its ability to focus national attention on the issue of access to capital and to generate public and private support for the sector. Prior to the Fund's creation in 1994, there were half as many CDFIs as there are today, managing one-third the assets. Yet lack of information on the impact of the agency's programs impedes the Fund's ability to generate and sustain political, and hence financial, support. It also prevents the Fund from using this information to evaluate and improve existing programs. Reporting the agency's impact is critical to helping decision makers and the general public see the difference it has made in the community.

Several policy tensions make it difficult to measure CDFI performance. Financial covenants can be tailored to meet individual circumstances or standardized across the industry. In terms of impact, the Fund must sometimes choose to favor either meaningfulness or measurability. At the same time, the Fund must strike a balance between gathering and using data to comply with current political demands and building its long-term institutional capacity. Lastly, the Fund must meet a double bottom line of securing its investments in CDFIs and making sure that awardees are fulfilling their mission to serve low-income communities.

Recommendations

In this report, we look at different strategies that the Fund may use to accomplish three goals: improving measures of financial soundness, enhancing impact assessment, and perfecting reporting mechanisms. We evaluate competing strategies using the criteria of political feasibility, substantive meaning, administrative feasibility, and cost-effectiveness. We suggest that the Fund take the following actions.

1) Add financial measures to the four financial covenants currently used (Alternative Two). This recommendation will give the Fund a better picture of each awardee's financial health. It will help the Fund evaluate its progress toward building capacity in the community development finance industry and creating greater access to capital.

2) Focus on fine-tuning and expanding activity and output measures to serve as proxies for impact (Alternative Two). This recommendation will enable the Fund to gather more meaningful and more measurable information on its impact on CDFIs and the communities they serve. In this way, the Fund can better measure its direct impact and report it to political stakeholders. In addition, this information has the potential to help the Fund improve its programs and build institutional capacity.

3) Contract with a third-party research organization to conduct in-depth case studies of specific CDFIs, longitudinal studies of a specific region, or scientific experiments with a control group (Alternative Three). This recommendation will allow the Fund to: a) tell a better story; b) approximate causality; and c) describe more community-level impact of CDFI Fund dollars.

4) Continue to require CDFIs to submit several different reports for financial soundness and performance measures. We find that the Fund's current reporting mechanisms for financial soundness and performance goals and measures are the best approach under prevailing conditions.

5) Make greater use of all reports. Enter the information into a comprehensive database, and use narratives to illustrate accomplishments and build political and external support for the agency. Gathering information is meaningless without effective data management. The Fund's financial and annual reports are a tremendous, underutilized source of information. If properly stored and used, this data will add to the Fund's knowledge about its awardees' activities and its own impact.

We are confident that these recommendations will improve the Fund's measures, increase knowledge, and build capacity.


Introduction

Since 1996, the Community Development Financial Institutions Fund (the Fund), housed in the U.S. Department of the Treasury, has committed more than $480 million in grants, loans, equity, and incentives to promote access to capital and local economic growth in low-income and underserved communities. Through 2002, the Fund has received appropriations totaling $633 million. FY2001-02, however, was the first year in which the Fund sustained a cut in funding. In 2000-01, the Fund received $118 million. This year the Fund received only $80 million, although that was higher than the President's budget recommendation of $68 million. The President's 2002-03 budget again recommends only $68 million for the Fund.

As the Fund's roster of awardees has grown longer, and as more money has flowed out to the community, Congress and the Department of the Treasury have become increasingly anxious to measure the impact of the Fund's investments. The Fund has undergone several audits, including one in 1998 by the GAO that criticized the Fund for not focusing enough on measuring impacts and for not implementing a functional post-award monitoring system to track awardees' performance. Lack of information on the impact of the agency's programs impedes the Fund's ability to generate and sustain political, and hence financial, support. It also prevents the Fund from using this information to evaluate and improve existing programs.

Many community development finance practitioners consider the Fund instrumental because of its ability to focus national attention on the issue of access to capital and to generate public and private sector support for organizations that do this work. Reporting the agency's impact is critical to helping decision makers and the general public see the difference it has made in the community. This report addresses the Fund's need for better information about its own impact.
The report begins with an examination of the performance measurement problems that the Fund faces and a discussion of the policy tensions and tradeoffs inherent in developing alternate proposals. The report then discusses several alternatives that focus on different parts of the Fund's performance measurement system: 1) the content and type of performance goals and measures; 2) the use of financial soundness measures; and 3) actual reporting mechanisms. In addition to literature on the field and interviews with community development finance professionals, we examine data collected through the Common Data Project, an attempt by the community development financial institution (CDFI) industry to collect standardized data across different sectors. Finally, we compare the alternatives and make recommendations for improving the Fund's current processes.

The CDFI Fund

The Fund was created in 1994 by a bipartisan initiative, the Riegle Community Development and Regulatory Improvement Act of 1994. This act established the Community Development Financial Institutions Fund (the Fund) as a wholly owned government corporation to promote economic revitalization and community development [1] through an investment and assistance program for community development financial institutions. The legislation authorized the provision of technical and financial assistance to community development financial institutions and incentives for banks investing in CDFIs. The Fund's mission is to promote access to capital and economic growth by directly investing in and supporting certified CDFIs [2] and by expanding financial service organizations' lending, investment, and services within underserved markets. The Fund's goals are to:

◊ Strengthen and expand the financial capacity of CDFIs;

◊ Strengthen the organizational capacity and expertise of CDFIs to better serve their communities;

◊ Expand financial service organizations' community development lending and investments; and

◊ Strengthen and expand the network of microenterprise development organizations and promote microentrepreneurship.

[1] The Fund defines community development as purposefully improving the social and/or economic conditions of 1) underserved people (including low-income people and people who lack adequate access to capital and/or financial services); and/or 2) residents of distressed communities. An entity with a mission directed at serving distressed communities must be able to demonstrate that its activities directly benefit residents of those communities. The Fund does not deem activities that provide only indirect benefits to residents as evidence of an applicant's primary mission of promoting community development (CDFI Fund 2002 Core Application Form, 22).

[2] One of the ways the Fund supports the CDFI industry is through certification. Many funders, including the Fund, use certification as a way to make funding decisions. The Fund certifies CDFIs that:
• have a primary mission of promoting community development;
• principally serve an underserved investment area or targeted population;
• make loans or development investments as their predominant business activity;
• provide development services (such as technical assistance or counseling);
• maintain accountability to the target market; and
• are non-governmental entities.

The Fund allocates awards and incentives on a competitive basis to local community development financial institutions (CDFIs) such as low-income credit unions, nonprofit loan funds, microenterprise loan funds, banks, and community development venture capital funds. Please see Appendix A for a fuller description of the Fund and of the Common Data Project.

What are the Problems with the Fund's Performance Measurement?

The CDFI Fund lacks the information necessary to measure and evaluate the overall impact of its investments on community development. Performance data is imperative for the Fund to manage programs effectively, maintain political and financial support, and make the best use of its limited resources. The Fund needs to assess, and possibly revise, the financial and performance measurement tools it has used in the past in order to maximize accurate, informative, useful, and timely data on awardees' activities and impact.

In this section and throughout the report, we will use the terms output (or activity) measures and impact measures, defined below. These definitions have been adapted from "Developing Methods for Measuring Impact," a technical assistance memo written for the CDFI industry by Beth Lipson of National Community Capital.

Output measures (or activity measures) count the products and services directly provided by a CDFI and assume that benefits result from those activities. They are usually the easiest and most straightforward impact information to collect because CDFIs can compile them from their own records. Outputs include the dollar volume of loans, the number of loans, the number of accounts opened, and the hours of technical assistance provided to clients.

Impact measures go a step further than outputs/activities to measure the effects of CDFI financing and other services on borrowers and the community. We divide impact measures into two categories: 1) first-level impacts include specific results of a program, such as jobs created, housing units developed, and an increase in clients served by community service organizations; 2) second-level impacts include broader changes in the community that help CDFIs understand how their financing fits in with broader community change. Some indicators might include changes in property values, homeownership rates, vacancy rates, physical neighborhood appearance, unemployment rates, and the development of more community services.

We will use the terms "performance measurement" and "performance measures" as the Fund uses them, to refer to the negotiated goals and measures included in each awardee's assistance agreement, to which the awardee is held accountable. These performance measures often include both output-type and impact-type measures.

Consequences of inadequate information

Inadequate information about the Fund's community development impact compromises the agency's ability to accomplish its core mission of promoting access to capital and local economic growth. Good impact measurement informs organizational decision making, improves programs, and communicates progress to stakeholders and funders. The inability to measure impact may result in the erosion of political support. At a time when budgets are tight and the impact of each dollar spent on social programs is questioned, agencies need to demonstrate results in a clear, convincing manner. Legislators, held accountable for wise stewardship of public funds, need programs with simple, understandable measures that capture "bang for buck." The Secretary of the Treasury, in whose purview the Fund lies, also needs to understand the Fund's impact in order to support increased funding recommendations in the President's budget. Without information to build political will in support of this program, the likelihood that funding levels will drop as priorities change increases dramatically.

Unlike some organizations, where the problem of impact measurement may mask more fundamental problems such as a lack of goals or direction, the Fund has clearly stated goals of increasing access to capital and economic growth and is taking concrete steps toward them. The CDFI industry grew out of the convergence of several social and financial factors:

◊ Changes in the financial services industry have diminished the role of banks, increased the role of non-bank financial institutions, decreased local financial intermediation strategies, and emphasized asset conformity (Pinsky, 2001, 6).

◊ Technological developments in credit scoring, securitization, and portfolio management have made it more difficult for lenders and borrowers in unconventional markets to obtain capital (Tansey, 2001, 5).

◊ The number of locally based banks and bank branches has declined substantially (NCCA, 2000, 3).

◊ Government's reduced role in social services and social change has weakened safety net services and increased demand for extra-governmental solutions. This has led to the approach of government as a venture capitalist for market interventions, including CDFIs (Pinsky, 2001, 6).

A market failure clearly exists when millions of low-income individuals and businesses cannot obtain basic financial services, and the Fund helps address this failure by providing funds and assistance to local financial institutions that serve this population. Since the Fund was formed in 1994, the community development finance industry has grown tremendously, and many practitioners ascribe much of that growth to the Fund's ability to provide funding, political support, legitimacy, and visibility for the field of community development finance. Just prior to the Fund's creation in 1994, there were half as many CDFIs as there are today, managing one-third the assets (Pinsky, 2001, 6). As of December 2001, the Fund had committed more than $350 million to CDFIs and had provided more than $130 million in incentives to banks. According to the director of one major CDFI trade organization:

   The Fund is important for at least six reasons. First, it is the largest single source of financing for CDFIs. Second, it places a priority on equity or net worth financing to strengthen CDFIs. Third, it emphasizes the use of government financing to leverage private financing. Fourth, it has raised the visibility of CDFIs enormously. Fifth, it has brought credibility to a field that was all but invisible up until 1992. Sixth, it has reinforced the benefits of private, targeted financial intermediation strategies as policy tools, opening other federal and state doors to CDFIs. (Pinsky, 2001, 30)

The problem of impact assessment is worth addressing so the Fund can continue to provide support for this industry, provide access to capital, and improve service delivery to make the best use of limited resources.



1998 GAO Report on Fund Performance Measurement

In 1998, the General Accounting Office (GAO) released a report on the community development industry entitled "CDFI Fund Can Improve Its Systems to Measure, Monitor, and Evaluate Awardees' Performance." [3] The GAO report focuses on the 1996 round of awards, and our discussion will be limited to the report's Core program findings. To determine the effectiveness of the Fund's performance measurement activities, the GAO analyzed the 26 assistance agreements that the Fund had closed with CDFIs by January 1998. These 26 agreements contained 87 performance goals with 165 performance measures. The GAO reported that approximately 79% of the negotiated performance measures were output measures, such as the number of loans made, as opposed to impact measures, such as the net number of jobs created or retained. [4]

The GAO further found that the relationship between goals and their associated measures was often unclear, making it difficult to measure progress toward goals. One 1996 awardee's first goal, for example, was to create substantial expanded opportunities for commercial enterprises in low-wealth and/or low-income communities by directing lending activities predominantly to minority-owned businesses, women-owned businesses, rural businesses, and others that serve low-income and low-wealth communities. The associated measure, however, focused on the cumulative dollar volume of new commercial loans without specifying that these loans should go to organizations serving low-income communities. In general, the GAO report criticized performance measures for not using specific units, not clearly defining terms, and lacking baseline and target values, target dates, and information about an awardee's target population.

The report suggested that the emphasis on output measures over impact measures during negotiations is attributable to two factors: (1) concerns about establishing causality between a CDFI's activities and results in the community, apart from external influences beyond the awardee's control; and (2) fear of sanctions by the Fund if impact measures are not met.

[3] This report was mandated by the Riegle Community Development and Regulatory Improvement Act of 1994. The report also evaluates the Fund's five-year strategic plan and how well that plan conforms to the provisions of the 1993 Results Act. The authorizing legislation did not provide guidance for evaluating performance measures, so the GAO applied the Results Act's standards to the goals and measures negotiated with awardees. The GAO report provides an excellent starting point for our discussion of the Fund's difficulties with measuring impact, as well as suggestions on how to improve its monitoring activities.

[4] The GAO report uses "activity measure" for output measure and "accomplishment measure" for impact measure. The Results Act guidelines place greater significance on accomplishment measures than on activity measures.

The GAO report recommended that the Fund take steps to:

◊ Encourage greater reporting of impact measures, based on awardees' business plans, that relate clearly and specifically to awardees' performance goals and measures. In addition, allow outcome goals to be reported in an annual survey instead of the annual report in order to allay fears of sanction.

◊ Implement a systematic post-award monitoring system to ensure that measures address all key aspects of the related goals.

◊ Update the assistance agreements to include baseline dates and values and information on target markets.

The Fund was in general agreement with these recommendations and has since taken steps to follow through on them. It has implemented an annual survey to collect impact measures, and it has tried to include more impact measures along with output measures, as well as baseline dates, values, and target market information, in assistance agreements. However, though the Fund addressed some of the GAO's concerns, it still has trouble quantifying its community impact and communicating that impact to stakeholders. The performance measurement system needs further refinement in order to provide more cogent, effective impact information.
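The GAO's critique amounts to a checklist: a usable performance measure needs a unit, a baseline value and date, a target value and date, and a defined target population. As an illustrative sketch only (the record layout and field names are our assumptions, not the Fund's actual data schema), such a checklist could be automated when assistance agreements are entered into a database:

```python
from dataclasses import dataclass
from datetime import date
from typing import List, Optional

@dataclass
class PerformanceMeasure:
    """One negotiated performance measure from an assistance agreement.
    Hypothetical record layout for illustration; not the Fund's schema."""
    goal: str                                 # the goal this measure supports
    description: str                          # what is being measured
    unit: Optional[str] = None                # e.g. "loans", "dollars", "jobs"
    baseline_value: Optional[float] = None
    baseline_date: Optional[date] = None
    target_value: Optional[float] = None
    target_date: Optional[date] = None
    target_population: Optional[str] = None   # e.g. "low-income borrowers"

    def missing_elements(self) -> List[str]:
        """Return the elements the GAO found commonly absent."""
        checks = {
            "unit": self.unit,
            "baseline_value": self.baseline_value,
            "baseline_date": self.baseline_date,
            "target_value": self.target_value,
            "target_date": self.target_date,
            "target_population": self.target_population,
        }
        return [name for name, value in checks.items() if value is None]

# A 1996-style measure of the kind the GAO criticized: dollar volume only,
# with no baseline, no targets, and no link to the low-income population.
vague = PerformanceMeasure(
    goal="Expand opportunities for commercial enterprises in low-income communities",
    description="Cumulative dollar volume of new commercial loans",
    unit="dollars",
)
print(vague.missing_elements())
# -> ['baseline_value', 'baseline_date', 'target_value', 'target_date', 'target_population']
```

Run against the measure criticized in the GAO example, dollar volume of commercial loans with no baseline, targets, or target population, the check flags every missing element, which is precisely the gap the GAO identified.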



Policy Tensions—Why is it so difficult to measure performance?

One way to look at the problem of performance measurement is to examine the tensions between competing goals. This section examines financial soundness, performance measurement, and reporting requirements and provides a picture of how the Fund's current systems address policy tensions in each of these areas. Looking closely at these tensions will inform decisions about alternatives and, eventually, recommendations.

Financial Covenants: Standardized versus Tailored

   Standardized |------------------------------▲----| Tailored

Current Fund financial covenants appear to be standard for the entire industry: all awardees supposedly must comply with the same set of financial benchmarks. The use of standardized financial measures serves as a substitute for regulating the many currently unregulated CDFIs, of which there are several hundred.

The private banking industry, by contrast, is closely regulated by federal and state agencies and laws designed to monitor and ensure the soundness of customer deposits and investments. Regulating agencies' examiners conduct bank examinations regularly. In general, a bank's financial soundness, compliance, and performance are measured by how well it rates in five categories: safety and soundness, compliance, Community Reinvestment Act (CRA), information systems and e-banking, and trust. Within these categories, banks are assessed along several components, each of which is given a score of one to five. Each major category is also assigned a composite score, and the bank is rated on the basis of these scores.

The banking industry thus has standards against which each of its regulated institutions is measured. The industry does not use simple ratios, uniform across the board; instead, an entire institution is evaluated along several dimensions, taking into account management capacity as well as asset quality, earnings as well as liquidity. The banking industry takes a bottom-up approach to financial soundness measurement: regulators examine each aspect of a bank's business practices and then determine how each aspect fits into a whole picture of financial health.

The Fund's approach, on the other hand, is top-down: it establishes the ratios that should indicate financial soundness, applies them to all funded agencies uniformly, and gives them the same weight. This top-down approach is in theory simpler and more cost-effective, especially where few measures are used, and it creates an industry standard against which all organizations can be judged. It focuses on safety and soundness, reinforcing homogeneity (Tansey, 2001, 6). Given the Fund's resource constraints, this standardized approach makes sense.

However, practitioners state again and again that they value the Fund's flexibility, and that flexibility is a hallmark of the industry. CDFIs work where markets fail, and they face special challenges in maintaining self-sufficiency, liquidity, and earnings. Practitioners feel these challenges should be accounted for on an individual basis: low-income individuals do not fit a cookie-cutter model in areas like credit history, home ownership, collateral, and capital (Tansey, 2001, 6). Tight regulation, CDFIs fear, would destroy their unique capacity to serve target markets. And in reality, the Fund tacitly recognizes these challenges by negotiating individually with those CDFIs that request an amendment of the standard measures, acting in some ways in a bottom-up manner. Therefore, financial covenants in practice are highly individualized, and we place the triangle representing current financial soundness measures much closer to the goal of being tailored than being standardized.

In choosing a method for measuring financial soundness, the Fund needs to weigh the resource advantages of standardized financial measures against the desire of CDFIs for maximum control and flexibility in determining the financial measures appropriate for their organizations.

Performance Goals and Measures: Measurability versus Meaning

   Measurability |----▲------------------------------| Meaning

The tension between measurability and meaning in performance or impact measurement is not new, and it is not going away. Every surveyed CDFI practitioner, and most in the nonprofit field generally, struggles with measuring progress toward mission. As John Sawhill, President and CEO of the Nature Conservancy, states, measuring success for a private enterprise "can often be as simple as reading a profit and loss statement… data on such measures as profit, return on investment, and shareholder value creation…enable them to benchmark their performance against competitors" (Sawhill, 2000, 1). For mission-driven nonprofits, however, measuring success is much more difficult. Imagine, for example, trying to measure progress toward the mission of alleviating human suffering, or even creating access to capital and economic growth.

It should come as no surprise, then, that most nonprofits and public sector agencies, including CDFIs, find it difficult to measure impact but are very capable of tracking activity (Sawhill, 2000, 9). Activity (or output) measures monitor daily tasks and are comfortable to track because organizations have a significant amount of control over outputs like the number of accounts opened, the number of loans approved, and the dollar volume of loans. These measures are easy to quantify, easy to explain, and easy to link back to the organization's efforts. Activity or output measures are appropriate for some purposes: because they monitor daily tasks, they help evaluate basic compliance with program obligations, and they help keep an organization (and its managers) accountable for specific, measurable activities (Kettl, 1998, 48).

Impact measurement, measuring progress toward mission, is much more difficult. Although organizations are moving toward impact measurement, impacts are difficult to attribute to an organization's specific activity and easy to misrepresent. This is a fundamental problem for many government agencies. As Kettl (1998) states, "at the federal level, most of what managers do is to work with other managers (in other agencies, at other levels of government, in nonprofit organizations, and in the private sector) to produce a program's results…. [T]he results of most programs are out of the control of the federal managers who manage them." Many federal agencies grapple with this tendency to measure activities that are easy to quantify but may not be meaningful for impact analysis. The problem is not that agencies deliberately choose to ignore outcome measures; rather, it is technically difficult to quantify activities such as "economic revitalization" or "community development," and therefore difficult to hold any one party responsible for the results. In fact, without in-depth longitudinal studies or social experimentation with counterfactuals, it is impossible to answer the causal question.

For many of these reasons, including the constraints of awardees' performance measurement systems that track outputs, the Fund's current performance measurement system emphasizes outputs, with a few quantifiable impact measures that try to get at progress toward mission. These quantifiable impact measures include the standard "job creation." But measures like job creation are imperfect impact indicators. Which jobs count: full-time, part-time, or just hours of employment? Does job quality count? How do economic conditions influence job creation? Do we count net jobs (including jobs lost) or gross jobs? What if a firm creates jobs but subsequently goes out of business? Even if the Fund defines job creation clearly for its awardees, the most a CDFI can do is say that it helped contribute to the creation of a job (Lipson, 2000, 6). This, in turn, raises the question of accountability. Can the Fund hold an awardee accountable for helping to create jobs when so many other variables are involved? Can the Fund itself be held accountable for job creation when so many other agencies and factors contribute to it?

In choosing a method for measuring performance and impact, the Fund needs to weigh the importance of measuring impact against the relative ease and low cost of gathering quantifiable data. As the CDP data suggest, the Fund uses performance measures that are more quantifiable than meaningful, and we place the triangle representing current performance goals and measures much closer to the goal of measurability than meaning.

Performance Goals and Measures: Meeting Current Constraints versus Building Long Term Capacity

Data to meet current constraints

Data to build long term capacity

A federal agency is never its own master, and this tension addresses the Fund's need both to gather data to meet the constraints of other agencies and legislators and to gather data to build its own long term capacity. Nonprofit organizations are obligated to report to many funders and agencies, each with its own set of requirements and objectives. In addition, nonprofits need to use data and reports to inform their own work, make program decisions, and build a foundation for future efforts. In general, the immediate and urgent need to satisfy funders and generate more support drives nonprofit organizations, especially in the area of reporting. As a result, nonprofits may spend much of their time and energy collecting data for the sole purpose of reporting it. They may end up gathering information that is not useful for improving their programs. Alternatively, the information may be useful, but the organization has neither the time nor the capacity to apply it. Data collection often revolves around meeting funders' goals instead of measuring progress toward the organization's mission, with the result that neither party is completely satisfied.

In contrast, if an agency focuses only on its data needs for long term capacity building, it may overlook funders' immediate needs for information at its peril. It may jeopardize financial and political support that is just as essential for future growth. With limited time, agencies know that time spent gathering information about their impact means less time available to serve their communities and fulfill their missions. While measurement and data collection are undoubtedly important, spending too much time on these activities can sap an organization's capacity to innovate, to recognize and take advantage of new opportunities, and to respond to changing community needs. The Fund needs to be mindful of placing a reporting burden on awardees that may inhibit their performance and their ability to make progress toward their missions. Instead, the Fund might think about how to partner with awardees to help them gather data that will improve their programs, serve their communities, and build organizational capacity. In any performance measurement system, the Fund will need to balance the need to meet current political expectations and maintain support with the goal of building long-term industry and organizational capacity. We place the triangle representing current performance reporting closer to the goal of meeting short-term political constraints.

Performance Measures and Financial Soundness: Social Impact versus Financial Success

[Figure: a continuum between "Social Impact" and "Financial Success," with a triangle marking the position of the Fund's current practice.]

The CDFI industry faces a specific tension between institutions' financial success and their ability to make a social impact. In the private banking industry, success has one dimension: sound fiscal management leads to profits. In community development finance, institutions have a double bottom line motivated by their mission to provide capital to low-income, higher risk clients. By definition, this often means trading off conventional measures of financial success in favor of creating social value in terms of wealth, businesses, and economic growth in distressed communities (Schmitt, 2001, 1). However, if CDFIs disregard their financial health and focus only on creating social value, they may find themselves unable to fulfill their mission. Unsound organizations are not able to make loans for very long before they collapse. It is in the interest of the Fund to build financial capacity in CDFIs so they can continue to serve their communities over the long term. On the other hand, if CDFIs focus on making their organizations financially strong to the detriment of their ability to serve their target market, they are not CDFIs at all. Without their social mission, there is no reason for these organizations to exist outside of the traditional banking system.

In developing a system of performance measurement, the Fund must balance financial and social returns. Its mission focuses both on building the capacity of community development finance and on creating economic growth, and any performance measurement system must address both sides of the equation. The Fund must decide how to prioritize between financial soundness and community impact. It must tackle questions about how much to compromise financial ratios, if at all, when an organization is clearly meeting mission goals. It must decide whether higher financial risk is justified by higher social return at the margin. Especially because it has limited funds, the Fund must answer these questions of priority in order to decide where its funds will be most effective, add the most value, and make the largest impact. The Fund's reporting mechanisms currently emphasize awardees' social impact over their financial success.
Although the Fund requires awardees to submit financial indicators during the application process, an awardee's impact, and therefore the Fund's own impact, is measured by its activities in the community rather than by strong financial health. The Fund's measures by and large are not about an awardee's strengthened financial capacity or outstanding ratios. Rather, negotiated performance goals and measures focus on the services provided to the community and the impacts of those services. We place the triangle representing current performance measures and financial soundness evaluation closer to the goal of representing social impact than to the goal of measuring financial success. The Fund needs to weigh the relative importance of these two goals as it refines its performance measurement system.

Criteria for evaluation

The outcomes of our alternatives will be evaluated against the following four criteria: political feasibility, substantive meaning, administrative feasibility, and cost effectiveness.

Political Feasibility
As a Federal agency, the CDFI Fund is affected by changes in the political climate. It is imperative that alternatives do the following:

◊ Maximize funding from Congress.
◊ Maximize support from the Treasury.

Substantive Meaning
Data gathered must also have substantive meaning. Information collected should be accurate and provide meaningful measures of the Fund's impact. Additionally, data and reporting requirements should not be so burdensome as to lead CDFIs to misrepresent their activities, accomplishments, and financial soundness. Specifically, information should:

◊ Maximize the Fund's knowledge of CDFI impact.
◊ Maximize reliability of the data collected.

Administrative Feasibility
This is an important criterion because the Fund and CDFIs need the human resources, expertise, and technological capacity to successfully implement any policy option. If the reporting or data collection requirements for CDFIs are too time consuming, the Fund runs the risk of getting incorrect or incomplete data, and in some cases, noncompliance. It is unlikely that the Fund will implement an alternative that drastically alters its day to day operations. Alternatives must:

◊ Minimize the time required for CDFIs to gather data.
◊ Minimize disruption to the Fund's current operations.



Cost Effectiveness
It is important that alternatives not be too expensive to implement. The potential benefits of an alternative will not be realized if the Fund or CDFIs do not have the capital to afford it. Specifically, alternatives must:

◊ Minimize cost for CDFI data collection.
◊ Minimize cost to the Fund.

Alternatives

In this section, we describe eight alternatives for modifying the Fund's performance evaluation and financial soundness methods to better capture information about the agency's impact. These alternatives do not represent the only choices available to the Fund but rather a look at the spectrum of options. In many cases, elements of these alternatives can be combined to alter the emphasis given to the competing policy objectives discussed above, or to address other policy objectives of the Fund.

Develop benchmarks to evaluate financial performance of CDFIs

Alternative One
Maintain current minimal use of financial benchmarks. All awardees are required to maintain the same level of financial soundness unless otherwise negotiated.

The Fund currently requires all awardees to meet four financial soundness covenants (see Appendix B for an explanation of these ratios):

1) Net assets ratio >= 0.25
2) Capital liquidity = 1
3) Operating liquidity = 1
4) Net revenue > 0

The four covenants give a basic picture of each organization's health. They are currently the same for all types and sizes of funded organizations, although many awardees negotiate with the Fund to revise one or more of these covenants to better suit their particular situation. Awardees are required to submit semi-annual financial reports that show they have kept these covenants. In addition, awardees submit an audited financial statement annually.
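To make the screen concrete, the four covenants amount to a simple pass/fail check. The sketch below is a hypothetical illustration, not the Fund's actual compliance process; the function and field names are invented, and we read the two liquidity covenants as minimums.

```python
# Hypothetical sketch of the Fund's four financial soundness covenants
# as a pass/fail screen. Names and thresholds are illustrative only.

def check_covenants(net_assets_ratio, capital_liquidity,
                    operating_liquidity, net_revenue):
    """Return each covenant mapped to True (kept) or False (missed)."""
    return {
        "net assets ratio >= 0.25": net_assets_ratio >= 0.25,
        "capital liquidity >= 1": capital_liquidity >= 1.0,
        "operating liquidity >= 1": operating_liquidity >= 1.0,
        "net revenue > 0": net_revenue > 0,
    }

# An awardee with a thin net asset base trips only the first covenant.
results = check_covenants(net_assets_ratio=0.18, capital_liquidity=1.2,
                          operating_liquidity=1.1, net_revenue=52_000)
missed = [name for name, kept in results.items() if not kept]
```

In practice, of course, most awardees negotiate revised covenants, so any such screen would need per-awardee thresholds rather than a single fixed set.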



How it works
The CDFI Fund will continue to hold all awardees accountable to the four covenants above. However, the Fund will have to spend hours negotiating individual exceptions to these covenants in the many cases (currently about two-thirds) where the covenants do not apply. The individually negotiated financial covenants will not be based on any industry benchmarks; they will depend on staff knowledge of the industry and the particular awardee, and on that awardee's incentive to be honest about the covenants it can reasonably achieve. The Fund will not have to spend any money to train compliance monitoring staff to look at financial benchmarks, nor will it need to update data management systems to collect or track any new data. Staff will continue to look at four basic financial indicators, but not necessarily with an understanding of what the indicators say about the financial health of each awardee. Staff will, however, spend a lot of time writing amendments or memoranda for awardees that miss financial covenants. Finally, financial soundness will continue to be less important in determining compliance and impact than performance goals and measures.

Awardees will continue not to focus on financial soundness covenants. Most will continue operating financially sound institutions, as it would be in their best interest to remain viable. However, this alternative might have unintended consequences. The current covenants indirectly encourage awardees to ignore other measures of financial soundness that the Fund has not required them to report. For example, the Fund application asks for other organizational financial health indicators such as the loan-to-asset ratio; deployment ratio; net income; self-sufficiency ratio; loan-to-deposit ratio; current ratio; delinquency ratio; loan loss ratio; and loan loss reserve ratio (see Appendix B for definitions).
However, awardees are not asked to report on these indicators after the award has been made and the award period has begun. Some awardees might take advantage of this to let these important indicators slide. For instance, awardees could hide low deployment ratios that show very little money going out to the community. As a result, the Fund may not know if an organization is struggling. If the Fund continues to use the same four financial soundness measures, Congress might think that the Fund is not looking closely enough at financial benchmarks. The Treasury department may feel pressure to impose regulation upon the CDFI field and so destroy the flexibility and uniqueness of CDFIs by turning them into traditional, regulated financial institutions. At that point, the opportunity for the Fund and the CDFI industry to have input into the financial benchmarks will be limited.

Discussion
Under this alternative, the CDFI Fund will not need to change its current systems or procedures, minimizing disruption to its operations. All current awardees already report on these four measures in their semi-annual financial reports, and all awardees submit financial statements to the Fund. There will be very little new controversy generated among awardees. Although some regulated CDFIs might complain about submitting redundant reports, in general, this alternative also minimizes the time awardees spend gathering and reporting financial information. However, as noted in a recent internal Fund communication, many of these covenants are already being modified in individual assistance agreements to more accurately reflect an organization's capacity given its financial position or current activities. Out of 25 agreements examined recently, only nine had all four financial covenants set at the recommended standards. The Fund will still need to spend a significant amount of time negotiating amendments, especially if the number of awardees in its portfolio grows. This alternative thus eliminates costs associated with revising the Fund's systems, but it does not reduce any existing costs.

Because the Fund will not collect any new, ongoing financial information about awardees, this alternative will not maximize Fund knowledge of its impact on CDFIs. CDFIs will not have incentives to report less favorable information. So while the financial data collected may be reliable, it may not portray the full picture of an awardee's financial health. Intuitively, the Fund feels that all CDFIs should meet the same financial soundness goals.
However, it has no way of knowing that this is substantively true and that there are not legitimate reasons for variation across segments of the field. If the Fund fails to revise financial soundness indicators to be more effective impact measures, the worst case scenario of Treasury regulating the CDFI industry may come to pass. In this case, the Fund may achieve political support, but at the price of its independence and the survival of the industry as it currently exists and operates. Alternatively, because Congress and the Treasury department are more focused on performance measures, the Fund may not have to endure outside regulation, but this strategy clearly does not maximize political support.

Alternative Two
Add additional financial measures to the four financial covenants currently used.

How it works
This alternative focuses on revising the current financial covenants for awardees by including additional financial measures. As noted in Alternative One, the Fund application already asks for other organizational financial health indicators such as the loan-to-asset ratio; deployment ratio; net income; self-sufficiency ratio; loan-to-deposit ratio; current ratio; delinquency ratio; loan loss ratio; and loan loss reserve ratio. Some of this information, if also gathered during the award period, could help the Fund evaluate the direct financial impact a Fund award has had on an awardee. The Fund will need to choose a few other key indicators and require awardees to keep track of them over the course of the award period, currently five years. There are two ways that this alternative can be implemented:

1) The Fund can require awardees to meet certain benchmarks over the five year award period. Benchmarks can be set by organization type or other criteria (see Alternative Three for a discussion of this option).
2) The Fund can ask awardees to improve on these ratios over time.

These two options can also be used in tandem, depending on the financial indicator in question. For example, it might be enough to require that awardees maintain positive net revenue over time while requiring improvement of the self-sufficiency ratio.

Discussion
Including other financial measures could improve the Fund's substantive knowledge of CDFI activity and its own impact. In fact, the Fund already does this with selected awardees in their assistance agreements.
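The two implementation options above, and their use in tandem, can be sketched in code. This is a hypothetical illustration of the logic only; the function name, modes, and numbers are invented:

```python
# Hypothetical check of one ratio tracked over the award period
# (oldest value first). Mode "benchmark": the latest value must meet
# a fixed threshold. Mode "improve": the latest value must beat the
# value at the start of the award period.

def covenant_met(history, mode, threshold=None):
    if mode == "benchmark":
        return history[-1] >= threshold
    if mode == "improve":
        return history[-1] > history[0]
    raise ValueError(f"unknown mode: {mode}")

# Used in tandem: net revenue held to a fixed floor, while the
# self-sufficiency ratio must improve over the award period.
net_revenue_ok = covenant_met([40_000, 55_000, 61_000], "benchmark", threshold=0)
self_sufficiency_ok = covenant_met([0.48, 0.55, 0.63], "improve")
```

The design point is that the two modes answer different questions: "benchmark" asks whether an awardee is sound in absolute terms, while "improve" asks whether the award is moving the awardee in the right direction.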
The loan-to-asset ratio and deployment ratio could tell the Fund if its funding has affected how much money (relative to agency size) makes it out to the community. The delinquency and loan loss ratios could tell the Fund if its award has affected how well the awardee is handling its portfolio. The self-sufficiency ratio could tell the Fund if its award has increased awardee earned income. Data collected will be reliable; financial information is relatively straightforward to audit.

Administratively, this alternative would not minimize the time needed to collect information or the disruption to the Fund's operations, though requiring additional financial measures will not impose an insurmountable time or resource burden on awardees. All must submit audited financial statements, and many track these ratios for their own purposes or for other regulators, so the additional time might be marginal. The Fund will need to come to an agreement about which measures to add. In order to gain the most cooperation from awardees, the Fund should include them in discussions on this topic. After the Fund decides on the additional measures, it will have to revise forms and record keeping to track them. If the Fund required additional financial information only from new awardees (2003 funding round onward), it would avoid the need to amend current assistance agreements; this would save a significant amount of time and paperwork. Overall, this alternative is cost-effective, as it requires only minor adjustments by the Fund and the awardee.

Politically, additional financial measures will help the Fund make a better case for impact to Congress and to the Treasury department. These measures will illustrate the concrete financial impact that the Fund has had on increasing CDFI organizational viability and increasing capital available to low-income communities. Though external factors can still affect these financial ratios and indicators, for the most part they are more controllable by awardees than most impact measures because they have to do with internal financial management.
On the other hand, the Fund will not maximize knowledge of CDFI activities or support from Congress and the Treasury by using additional financial measures alone. Revising financial measures is important, but the Fund will still need to address concerns about impact measures.

Alternative Three
Develop financial benchmarks for comparable peer groups. Hold awardees to standards of each CDFI peer group.

How it works
Benchmarks will be used by the Fund to help it assess the financial soundness of its Core awardees. Benchmarks will allow Fund personnel to compare standard or "average" scores on financial soundness measures to the actual performance of the CDFIs whose cases they manage. Most organizations that have taken steps toward creating such benchmarks divide CDFIs by type of institution (bank, loan fund, credit union, venture capital fund, or multibank) and asset size. The widespread assumption is that there are real differences among CDFIs across these classifications. We follow these organizations' lead and construct benchmarks by type and asset size using the Common Data Project (CDP) data, the most comprehensive information to date about the structure and behavior of the CDFI industry. We use key financial ratios that Fund personnel often cite as indicators of the financial soundness of an institution. Specifically, we examine: net income, net revenue, the self-sufficiency ratio, the net loan loss ratio, the loan loss reserve ratio, the net asset ratio, and the delinquency ratio.[5] Benchmarks consist of summary statistics (the mean, median, minimum, maximum, and range of scores on the financial soundness indicators) arranged by type of institution (Tables 1.1-1.7) and then by asset size (Tables 2.1-2.7).
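The construction of the benchmarks themselves is mechanical: group the CDP records by institution type (or asset-size class) and compute summary statistics within each group. A minimal sketch, with invented field names standing in for the actual CDP variables:

```python
from statistics import mean, median

def benchmark(values):
    """Summary statistics reported for one peer group (as in Tables 1.1-2.7)."""
    return {"median": median(values), "mean": mean(values),
            "min": min(values), "max": max(values),
            "range": max(values) - min(values), "obs": len(values)}

def benchmarks_by_group(records, ratio, group_key):
    """Peer-group benchmarks for one ratio, keyed by institution type or
    asset-size class. Records missing the ratio are skipped."""
    groups = {}
    for rec in records:
        if rec.get(ratio) is not None:
            groups.setdefault(rec[group_key], []).append(rec[ratio])
    return {g: benchmark(vals) for g, vals in groups.items()}

# Toy CDP-style records; field names are invented for illustration.
records = [
    {"type": "loan fund", "net_asset_ratio": 0.40},
    {"type": "loan fund", "net_asset_ratio": 0.50},
    {"type": "bank", "net_asset_ratio": 0.08},
    {"type": "bank", "net_asset_ratio": None},  # missing value, skipped
]
tables = benchmarks_by_group(records, "net_asset_ratio", "type")
```

Note that the observation count ("obs") reflects only records with a reported value, which is why the OBS row varies from table to table in the CDP data.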

Table 1.1 NET INCOME BY TYPE

|        | BANK         | CREDIT UNION | LOAN FUND   | VENTURE CAP | MULTIBANK   |
|--------|--------------|--------------|-------------|-------------|-------------|
| MEDIAN | $394,000     | $10,517      | $40,930     | $84,260     | $106,430    |
| MEAN   | -$243,681    | $41,916      | $117,130    | $614,382    | $664,801    |
| MIN    | -$27,000,000 | -$1,965,905  | -$2,583,983 | -$1,439,003 | -$2,172,967 |
| MAX    | $22,300,000  | $1,642,183   | $2,297,151  | $5,344,670  | $3,529,979  |
| RANGE  | $49,300,000  | $3,608,088   | $4,881,134  | $6,783,673  | $5,702,946  |
| OBS    | 15           | 191          | 130         | 15          | 6           |

[5] The deployment ratio and the loan to deposit/loan to share ratio were excluded from our analysis due to large numbers of missing values across the industry.

Table 1.2 NET REVENUE BY TYPE

|        | BANK        | CREDIT UNION | LOAN FUND   | VENTURE CAP | MULTIBANK  |
|--------|-------------|--------------|-------------|-------------|------------|
| MEDIAN | $368,000    | $33,927      | $276,658    | $41,111     | $1,492,559 |
| MEAN   | $323,656    | $290,259     | $1,026,975  | $979,941    | $1,528,702 |
| MIN    | -$754,756   | -$133,313    | -$1,358,629 | -$1,410,003 | -$106,363  |
| MAX    | $1,165,749  | $6,094,118   | $11,700,000 | $8,841,612  | $3,529,978 |
| RANGE  | $1,920,505  | $6,227,431   | $13,058,629 | $10,251,615 | $3,636,341 |
| OBS    | 15          | 193          | 130         | 24          | 6          |

Table 1.3 SELF SUFFICIENCY RATIO BY TYPE

|        | BANK   | CREDIT UNION | LOAN FUND | VENTURE CAP | MULTIBANK |
|--------|--------|--------------|-----------|-------------|-----------|
| MEDIAN | 0.927  | 1.089        | 0.488     | 0.512       | 1.093     |
| MEAN   | 0.820  | 1.159        | 0.579     | 0.200       | 0.883     |
| MIN    | -0.999 | 0.000        | 0.070     | -7.366      | 0.157     |
| MAX    | 1.382  | 4.262        | 1.485     | 1.748       | 1.156     |
| RANGE  | 2.381  | 4.262        | 1.415     | 9.114       | 0.999     |
| OBS    | 16     | 193          | 130       | 22          | 6         |

Table 1.4 NET LOAN LOSS RATIO BY TYPE

|        | BANK   | CREDIT UNION | LOAN FUND | VENTURE CAP | MULTIBANK |
|--------|--------|--------------|-----------|-------------|-----------|
| MEDIAN | 0.003  | 0.009        | 0.007     | 0.000       | 0.003     |
| MEAN   | 0.258  | 0.029        | 0.032     | 0.161       | 0.021     |
| MIN    | -0.004 | 0.000        | -0.003    | 0.000       | 0.000     |
| MAX    | 4.031  | 0.398        | 0.356     | 1.189       | 0.100     |
| RANGE  | 4.035  | 0.398        | 0.359     | 1.189       | 0.100     |
| OBS    | 16     | 189          | 130       | 11          | 6         |

Table 1.5 LOAN LOSS RESERVE RATIO BY TYPE

|        | BANK  | CREDIT UNION | LOAN FUND | VENTURE CAP | MULTIBANK |
|--------|-------|--------------|-----------|-------------|-----------|
| MEDIAN | 0.015 | 0.031        | 0.077     | 0.134       | 0.043     |
| MEAN   | 0.401 | 0.052        | 0.125     | 0.167       | 0.052     |
| MIN    | 0.000 | -0.034       | 0.000     | 0.000       | 0.003     |
| MAX    | 6.140 | 0.456        | 0.632     | 0.628       | 0.120     |
| RANGE  | 6.140 | 0.490        | 0.632     | 0.628       | 0.117     |
| OBS    | 16    | 189          | 126       | 10          | 6         |

Table 1.6 NET ASSET RATIO BY TYPE

|        | BANK  | CREDIT UNION | LOAN FUND | VENTURE CAP | MULTIBANK |
|--------|-------|--------------|-----------|-------------|-----------|
| MEDIAN | 0.081 | 0.095        | 0.437     | 0.860       | 0.520     |
| MEAN   | 0.105 | 0.106        | 0.446     | 0.748       | 0.532     |
| MIN    | 0.030 | -0.025       | -0.012    | 0.135       | 0.045     |
| MAX    | 0.231 | 0.410        | 0.987     | 1.000       | 0.996     |
| RANGE  | 0.201 | 0.435        | 0.999     | 0.865       | 0.951     |
| OBS    | 16    | 193          | 130       | 25          | 6         |

Table 1.7 DELINQUENCY RATIO BY TYPE

|        | BANK   | CREDIT UNION | LOAN FUND | VENTURE CAP | MULTIBANK |
|--------|--------|--------------|-----------|-------------|-----------|
| MEDIAN | 1.051  | 1.038        | 1.050     | 1.008       | 1.060     |
| MEAN   | 3.549  | 1.069        | 1.102     | 1.160       | 1.091     |
| MIN    | 1.000  | 1.000        | 1.000     | 1.000       | 1.000     |
| MAX    | 38.442 | 1.470        | 1.470     | 2.206       | 1.254     |
| RANGE  | 37.442 | 0.470        | 0.470     | 1.206       | 0.254     |
| OBS    | 15     | 189          | 122       | 11          | 6         |

Table 2.1 NET INCOME BY ASSET SIZE

|        | $0-$62,700,000 | $73,600,000-$352,000,000 | $352,000,000+ |
|--------|----------------|--------------------------|---------------|
| MEDIAN | $12,416        | $867,000                 | $5,116,923    |
| MEAN   | -$14,020       | $2,006,690               | $4,576,372    |
| MIN    | -$2,172,867    | -$3,727,833              | $2,297,151    |
| MAX    | $2,078,636     | $22,300,000              | $6,315,043    |
| RANGE  | $4,251,503     | $26,027,833              | $4,017,892    |
| OBS    | 342            | 12                       | 3             |

Table 2.2 NET REVENUE BY ASSET SIZE

|        | $0-$62,700,000 | $73,600,000-$352,000,000 | $352,000,000+ |
|--------|----------------|--------------------------|---------------|
| MEDIAN | $78,440        | $1,799,803               | $12,500,000   |
| MEAN   | $448,541       | $2,213,985               | $14,000,000   |
| MIN    | -$1,207,025    | -$849,000                | $11,700,000   |
| MAX    | $6,622,427     | $6,094,118               | $18,800,000   |
| RANGE  | $7,829,452     | $6,943,118               | $7,100,000    |
| OBS    | 354            | 12                       | 3             |

Table 2.3 SELF SUFFICIENCY RATIO BY ASSET SIZE

|        | $0-$62,700,000 | $73,600,000-$352,000,000 | $352,000,000+ |
|--------|----------------|--------------------------|---------------|
| MEDIAN | 0.910          | 1.044                    | 1.095         |
| MEAN   | 0.882          | 0.748                    | 0.884         |
| MIN    | 0.000          | -0.999                   | 0.359         |
| MAX    | 2.830          | 1.155                    | 1.196         |
| RANGE  | 2.830          | 2.154                    | 0.837         |
| OBS    | 352            | 12                       | 3             |

Table 2.4 NET LOAN LOSS RATIO BY ASSET SIZE

|        | $0-$62,700,000 | $73,600,000-$352,000,000 | $352,000,000+ |
|--------|----------------|--------------------------|---------------|
| MEDIAN | 0.008          | 0.003                    | 0.006         |
| MEAN   | 0.045          | 0.007                    | 0.004         |
| MIN    | 0.000          | 0.000                    | 0.000         |
| MAX    | 0.426          | 0.035                    | 0.008         |
| RANGE  | 0.426          | 0.035                    | 0.008         |
| OBS    | 337            | 12                       | 3             |

Table 2.5 LOAN LOSS RESERVE RATIO BY ASSET SIZE

|        | $0-$62,700,000 | $73,600,000-$352,000,000 | $352,000,000+ |
|--------|----------------|--------------------------|---------------|
| MEDIAN | 0.046          | 0.012                    | 0.011         |
| MEAN   | 0.093          | 0.235                    | 0.040         |
| MIN    | 0.000          | 0.000                    | 0.007         |
| MAX    | 0.628          | 2.631                    | 0.103         |
| RANGE  | 0.628          | 2.631                    | 0.096         |
| OBS    | 334            | 12                       | 3             |

Table 2.6 NET ASSET RATIO BY ASSET SIZE

|        | $0-$62,700,000 | $73,600,000-$352,000,000 | $352,000,000+ |
|--------|----------------|--------------------------|---------------|
| MEDIAN | 0.150          | 0.090                    | 0.182         |
| MEAN   | 0.282          | 0.117                    | 0.192         |
| MIN    | -0.012         | 0.045                    | 0.084         |
| MAX    | 1.000          | 0.434                    | 0.310         |
| RANGE  | 1.012          | 0.389                    | 0.226         |
| OBS    | 357            | 12                       | 3             |

Table 2.7 DELINQUENCY RATIO BY ASSET SIZE

|        | $0-$62,700,000 | $73,600,000-$352,000,000 | $352,000,000+ |
|--------|----------------|--------------------------|---------------|
| MEDIAN | 1.043          | 1.037                    | 1.050         |
| MEAN   | 1.197          | 1.064                    | 1.041         |
| MIN    | 1.000          | 1.008                    | 1.014         |
| MAX    | 1.513          | 1.166                    | 1.060         |
| RANGE  | 0.513          | 0.158                    | 0.046         |
| OBS    | 329            | 12                       | 3             |

Discussion
There are precedents for industry variation in financial soundness ratios. For example, Dun & Bradstreet lists benchmarks for financial ratios that vary by industry. Others have also established ratios and indicators for high performers in different fields, and the NCIF, a CDFI trade association, is exploring the possibility of using benchmarks for its own members. It does not seem far-fetched to apply this line of thinking to the awardees of the CDFI Fund.

However, after examining the CDP industry data, we caution that the outcomes of using either set of benchmarks presented above will depend on how they are used. If the idea is simply to give the Fund and individual CDFIs a sense of how similar organizations seem to be performing on these seven measures, descriptive statistics can be helpful. Benchmarks can potentially help the Fund identify institutions that might be in financial trouble before investment is put at risk. Furthermore, benchmarks can facilitate more streamlined assessment of awardees' financial health and possibly reduce the time and cost necessary to look over awardees' multiple reports. In addition, less expertise will be needed from Fund staff evaluators if they are able to trade their case-by-case appraisals for standardized indicators.

On the other hand, if the Fund intends to use the benchmarks to reward and penalize its awardees, it may find that it mistreats many CDFIs. The data, as is, are too limited to say anything definitive about comparable and meaningful groupings.[6] Even if we were able to clean up the data and find some rationale for dividing CDFIs by type and asset size, we might find that these classifications do not explain enough of the variation in the financial ratios to comfortably use descriptive statistics as definitive standards of financial soundness. The Fund will not want to punish firms according to benchmarks drawn along these lines if a high percentage of variation is actually due to unaccounted-for factors. Doing so will potentially reward mediocre firms and punish good ones simply on the basis of factors the benchmark dimensions do not capture. The result will be negative for the Fund in the long term. Its portfolio will not improve over time. In fact, the benchmarks may cause even more confusion about measurement, put investments at risk, and jeopardize the political future of the institution. CDFIs may find the benchmarks arbitrary and complain about any unfair treatment they receive.

Simultaneously, the institution of benchmarks may itself have unintended consequences for the behavior of CDFIs. Newly motivated to pay attention to ratios that they do not currently have to consider, organizations may merely aim for the benchmarks, tinkering with their operations to achieve them in order to avoid consequences. This can potentially threaten the social mission of CDFIs; that is, they may feel they must trade off impact for financial soundness in order to comply with the Fund.

Sidebar 1. Limitations of CDP Data

High Variance Within Variables
◊ Many more CDFIs with small rather than medium or large asset size.
◊ Dramatically different sized groupings by institution type.
◊ Highly influential outliers in distributions of financial measures.
→ Jeopardizes validity of descriptive statistics.
→ Makes it difficult to detect real differences by asset size or type.

Missing Data
◊ Lack of consistent, cross-industry data on primary lending activity or target market characteristics.
→ Unable to include these factors in our analysis of financial performance for the industry as a whole.

Completely Missing Variables
◊ No variables available to control for geographic area, sampling methods, surveying organization, or Core awardees.
→ Cannot take these factors into consideration when designing benchmarks.

Please see Appendix C for a more detailed discussion of data issues.

[6] Please see Appendix C for a full discussion of important CDP data issues.
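The danger that outliers pose to these benchmarks is easy to see in miniature. In Table 1.7, for instance, the bank delinquency mean (3.549) sits far above the median (1.051) because one extreme value (38.442) drags it upward. A toy illustration of the same effect (the numbers below are invented, not CDP data):

```python
from statistics import mean, median

# Delinquency-style scores for a small peer group with one extreme
# outlier, echoing the bank column of Table 1.7.
scores = [1.00, 1.02, 1.05, 1.08, 1.10, 38.4]

group_mean = mean(scores)      # dragged far above every typical member
group_median = median(scores)  # stays representative of the group
```

Here the mean describes none of the institutions in the group; a median-based or outlier-trimmed benchmark would be far more representative of the peer group than the raw mean.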

On the political front, the idea of benchmarks, especially if used as part of a scorecard, will doubtless be well received. However, sustaining this support is directly linked to the substantive meaning of the benchmarks. If benchmarking leads to mismanagement of awardees, the Department of the Treasury and others will be less likely to see it in a positive light.

Improve performance measurement to better capture the Fund's "value-added"

Alternative One
Continue using the same mixture of stated goals, output measures, and narratives to assess the impact of CDFIs.

How It Works
Currently, the Fund's performance goals and measures cover a period of five years from the closing date of the award. (Please see Appendix D for a description of how performance goals and measures are determined and used by the Fund.) Most often, the Fund's performance measures look at output indicators: how many loans have been closed in the last period; how many businesses have been assisted; how many savings accounts have been opened; how many training sessions have been conducted. Sometimes, the Fund looks at first-level impact indicators: how many jobs have been created as a result of businesses assisted; how many housing units have been produced from loans to nonprofit developers. In addition, the Fund attempts to capture second-level impact measures by asking CDFIs to submit narrative descriptions of their activities. Once submitted, reports are merely checked against their negotiated benchmarks for compliance and filed.

Discussion
This alternative will not disrupt the Fund's operational procedures. The Fund will not have to make additional investments in the data gathering process. In fact, it will save the costs of redesigning its survey instrument and informing CDFIs of changes. However, it will continue to have difficulty interpreting its measures and understanding what they really mean for impact.
The CDFIs are unlikely to react much since nothing that they are currently required to do will change. They will save the costs of readjusting to a new kind of data collection system and will likely be able to gather the familiar numbers more easily with each passing year.



The way in which CDFIs' performance is currently measured will also lead to continued problems communicating results to decision makers and will exacerbate the threat of deeper budget cuts. For example, simply recording the number of training sessions conducted or the hours of technical assistance provided does not give stakeholders an adequate picture of who is receiving those services, what kind of services are being provided, or the outcomes of providing them. It does not help us assess whether the Fund's programs actually work. Because the Fund's current performance measures do not demonstrate impact, political expectations will not be met. Because Congress and the Treasury will remain unclear about the effects of the Fund's activities, they will be less likely to fight for larger budget allocations for the Fund, especially in light of the current administration's funding priorities. Thus, the CDFI Fund budget is likely to continue dropping.

In addition, current performance measures do not maximize the Fund's knowledge of CDFIs' impact. Since it is difficult to capture impact indicators, current performance data is also not very reliable. As noted in the GAO report, negotiated assistance agreements do not contain enough quantitative impact measures, and in most cases measures are not directly related to goals. The Fund needs better, clearer, and easier-to-measure proxies for progress toward its goal of providing access to capital in distressed communities. Though job or business creation statistics are intuitively appealing, they conflate too many other factors to be useful. People with jobs and new businesses can create capital in a community, but there may be more direct measures of the Fund's effect on capital creation. Current activity measures used by the Fund begin to get at access to capital but only scratch the surface and do not provide a deep understanding of the impact the Fund has in communities.
Alternative Two

Focus on fine-tuning and expanding activity and output measures to serve as proxies for impact.

How It Works

The Fund already collects many output measures that, with slight adjustments, could serve as better proxies for impact. The Fund needs to capture the marginal effect of its funding, and this approach will help the Fund even if it lacks the capacity to analyze longitudinal data.



For instance, instead of measuring impact directly, a question such as the "increase in the number of savings accounts opened by low-income individuals" serves as a proxy for an area's increase in capital. Please see the sidebar below for examples of how this might be done. This alternative will fulfill the spirit of the GAO recommendations for impact measurement by making activity measures more substantive.

Discussion

In the short run, this alternative will inevitably involve a change in the Fund's operating procedures.

Sidebar 2. Implementing Performance Measurement Alternative Two: Turning Output Measures into Impact Measures

Output Measure: Dollar volume of loans
Refined to Measure Impact:
◊ Dollar volume of loans to low-income people
◊ Dollar volume of loans to clients rejected by traditional banks
◊ Dollar volume of loans to minorities and women

Output Measure: Number of business loans
Refined to Measure Impact:
◊ Business loans to start-ups
◊ Business loans to minorities
◊ Business loans to those rejected by traditional lenders

Output Measure: Hours of technical assistance
Refined to Measure Impact:
◊ Number of loans closed after technical assistance was provided
◊ Delinquency rate of clients receiving technical assistance

Output Measure: Number of housing loans
Refined to Measure Impact:
◊ Housing loans to first-time homebuyers
◊ Housing loans to those rejected by traditional lenders
◊ Number of clients for whom housing costs have decreased as a result of buying a house
◊ Housing loans to female heads of households
◊ Housing loans in low-income census tracts

Output Measure: Number of bank accounts opened
Refined to Measure Impact:
◊ Number of previously unbanked individuals opening accounts

The Fund will incur some costs designing and implementing these new measures in its data gathering process. However, the changes will make it easier for the CDFI Fund to interpret these numbers and communicate them meaningfully to others. Initially, CDFIs will incur additional data gathering and reporting costs while adjusting to the new measures expected of them. Simultaneously, the new measures will be ones that CDFIs can more accurately collect and report. In addition, if CDFIs are able to put systematic data gathering procedures in place, this alternative will reduce their long-term measurement costs.
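The refinements above amount to disaggregating existing totals by borrower characteristics. The following is a minimal sketch of that idea; the loan records, field names, and dollar figures are all invented for illustration:

```python
# Sketch: turning a raw output measure (total dollar volume of loans) into
# refined impact proxies by filtering on hypothetical borrower attributes.
loans = [
    {"amount": 25_000, "low_income": True,  "bank_rejected": True,  "minority_or_woman": False},
    {"amount": 60_000, "low_income": False, "bank_rejected": False, "minority_or_woman": True},
    {"amount": 15_000, "low_income": True,  "bank_rejected": False, "minority_or_woman": True},
]

def dollar_volume(loans, flag):
    """Total dollar volume of loans whose record has the given flag set."""
    return sum(loan["amount"] for loan in loans if loan[flag])

total_volume = sum(loan["amount"] for loan in loans)        # raw output measure
low_income_volume = dollar_volume(loans, "low_income")      # refined impact proxy
rejected_volume = dollar_volume(loans, "bank_rejected")     # refined impact proxy

print(total_volume, low_income_volume, rejected_volume)     # 100000 40000 25000
```

The point of the sketch is that no new data collection machinery is required beyond recording a few extra borrower attributes at loan closing; every refined measure is a filtered sum over data the CDFI already holds.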



On the one hand, politicians, who like to talk in terms of jobs created or maintained, will not be fully satisfied by these more measurable but less flashy indicators. On the other hand, stakeholders will be able to quantify the effects of the Fund's activities, which will lead to more concrete support of the Fund. This measurement technique will also provide the Fund with more meaningful information about the impact of CDFIs' activities. It is relatively easy for CDFIs to collect activity and output information, thus increasing the response rate, reliability, and integrity of the information collected. This alternative will be cost effective over the long haul. It will allow CDFIs to collect easier-to-measure data about outcomes they can control. The Fund's resources can then be spent on developing ways to analyze and apply the reliable data collected rather than on interpreting complicated impact measures.

Alternative Three

Contract with a third party research organization to conduct either in-depth case studies of specific CDFIs, longitudinal studies of a specific region, or scientific experiments with a control group.

How It Works

Instead of trying to measure impact for all CDFIs, the Fund will contract with a third party research organization to conduct an impact study. Three possible studies are: in-depth case studies of specific CDFIs, longitudinal studies of a specific region, or scientific experiments with a control group. The studies will focus on two areas: the awardee as well as the awardee's clients and community. To be effective and meaningful, these studies will take time to design, conduct, and analyze. The Fund will have to make a long-term commitment to this endeavor with both human and capital resources. Similarly, results will not be immediate. However, at the end of the study, the Fund will have comprehensive information about the long-term effects of its funding in specific markets.
Case studies are the most narrow of the three types of studies to conduct and are potentially less costly. For such a study, cases will have to be chosen carefully so they represent either "typical" CDFIs or a range of representative CDFIs. The Fund may choose to conduct a few case studies simultaneously so that results can be compared.



Longitudinal studies involve looking at a specific region and documenting the changes in the community and the effects of a CDFI's activities over a period of time. Like case studies, the region of interest will have to be chosen carefully to ensure that it is not too dissimilar from other regions. This type of study is much more involved than a case study; therefore, it may not be feasible for the Fund to commission more than one or two longitudinal studies simultaneously.

Scientific experiments are the most complicated of the three types of studies and demand high levels of resources and expertise. The idea behind a scientific experiment is to attempt to determine causality with the use of a control group and an experimental group. The researcher will have to choose two areas with similar economic and demographic characteristics. The experimental area will have a Core awardee CDFI operating within its boundaries; the control area will not. The researcher will then document the changes in each community; in this way, the researcher can begin to tease out the effects of a CDFI's activities. Although social science experiments are less accurate than natural science experiments because all external influences cannot be controlled, a properly executed experiment can come closer to determining the true cause of a particular outcome than either a longitudinal study or a case study.

Discussion

These studies will not disrupt the Fund's operations, as they will be conducted by third party contractors. For CDFIs involved in the study, this research may increase their reporting burdens during the course of the study. Researchers will help bear some of this burden, however, and the participating awardees will benefit from the results. The rest of the awardees will not be affected. However, conducting any type of social experiment at such a macro level involves enormous amounts of time and resources.
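The control-group logic described above reduces to a simple before/after comparison between the two areas. The sketch below uses an invented outcome (the numbers and variable names are hypothetical, not drawn from any actual study):

```python
# Sketch of the comparison behind a control-group experiment: before/after
# averages of some community outcome (e.g., small-business lending per
# capita) for an area with a Core awardee and a matched control area.
experimental = {"before": 100.0, "after": 130.0}  # area with a Core awardee
control      = {"before": 100.0, "after": 110.0}  # similar area, no awardee

def change(area):
    """Change in the outcome over the study period for one area."""
    return area["after"] - area["before"]

# Difference-in-differences: the experimental area's change minus the
# control area's change estimates the effect attributable to the CDFI,
# netting out trends that affected both areas.
effect = change(experimental) - change(control)
print(effect)  # 30.0 - 10.0 = 20.0
```

The subtraction is what distinguishes an experiment from a simple longitudinal study: background trends common to both areas cancel out, leaving an estimate of the CDFI's marginal contribution.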
For example, the Neighborhood Reinvestment Corporation (NRC) is currently conducting pilot case studies; it has given $500,000 grants to participating organizations to defray the costs of conducting the study. The Fund will need to partner with private foundations or research institutes interested in the topic in order to make this work. Fortunately, evaluation of the CDFI field is a popular topic, and many foundations are participating in and funding similar projects. The Vermont Development Credit Union (VDCU) and Coastal Enterprises, Inc. of Maine (CEI) offer other examples. Both CDFIs are currently conducting longitudinal impact studies that begin to establish causal links between their programs and community impacts. Both



awardees employ independent research institutes and use foundation dollars to survey members and loan recipients about the effects of their services on people's lives and on the community. The Ford Foundation has funded CEI's study at $520,000 over three years. Initial findings are promising, showing that the organizations' activities may be correlated even with second-tier effects like "greater community involvement." Even though these studies have made significant headway, however, they have not been able to prove causality.

These studies can result in useful, rich information about CDFIs, their activities, and their long-term community impact. The studies will be useful not only for informing the Fund but also for educating policymakers and decision makers in Congress and the Department of the Treasury. And while the data collected may not prove causality, it will be reliable if it has been collected over several years and there is a basis for comparison. However, the cost of these studies may make them unattractive to policymakers if it represents a significant fraction of the agency's budget, since it takes away from the Fund's primary funding activities.

There are both merits and drawbacks to conducting case studies, longitudinal studies, or scientific experiments. The benefit of case studies is that they are more narrowly focused than a longitudinal study or scientific experiment. Case studies can provide

Sidebar 3. Exemplary Impact Studies

Coastal Enterprises, Inc. of Maine--Low-Income Longitudinal Study
How it works: CEI has agreements with its borrowers that require them to turn in a form each month with the names of employees, hires, fires, ethnicities, and income levels. This information is stored in CEI's Social Information System (SIS) database. In addition, CEI is studying what happens over the long term to people who get jobs in funded businesses, conducting 45-minute interviews with participants. CEI also tracks people using state wage and unemployment data at the individual level. A third party survey research center at a university conducts the interviews and research.

Vermont Development Credit Union Longitudinal Study
How it works: In partnership with the University of Vermont, VDCU conducted a study to examine how a relationship with the credit union changes lives over time; whether membership in the credit union builds wealth; how the credit union promotes community development; and the social return on investment from VDCU's inputs. Components of the study included a postcard survey of members, focus groups, telephone interviews of a sample of members, web-based surveys, and paper-based surveys. The survey focused on primary impacts (increased savings, paying off debts) as well as secondary and tertiary impacts (increased property values, increased school attendance, increased political action, involvement in community). VDCU also designed an Access database to capture longitudinal data on members, savings, and lending.



insight into best (or worst) practices and tease out specific indicators that can be used to measure impact. The drawbacks of case studies are that they do not adequately rule out external factors, and their results may not generalize to the industry as a whole. Longitudinal studies can provide the Fund with richer information regarding the impacts of a CDFI's activities. Since longitudinal studies are conducted over a period of time, both before and after the presence of a CDFI, they can better determine the effects of a CDFI's activities. The drawback of a longitudinal study is that it cannot be generalized to the entire CDFI industry. Although scientific experiments are the most complicated and costly to conduct, they are arguably the best method for determining the impact of a CDFI's activities. If conducted carefully, the results of these experiments can also be generalized across the CDFI industry. Although this form of study has the most potential, as with most social science experiments, it is also the most difficult to conduct and involves a vast number of uncontrollable and unforeseen variables, even under the best conditions. As such, results may not be as unequivocal as hoped, and this type of study therefore involves the most risk to the Fund's investment.

Modify Report Mechanisms

Alternative One

Continue to require CDFIs to submit several different reports for financial soundness and impact measures.

How It Works

Currently, the Fund requires its Core awardees to submit four reports each year: two semi-annual financial reports covering 6 and then 12 months of data; the official audited financial statements with the auditor's opinion; and a fiscal year-end annual report. The annual report includes a narrative on activities undertaken during the year and reports on progress toward goals and measures. Progress is rated Outstanding, Good, Satisfactory, Below Expectations, or Unacceptable.
In addition, the Fund has conducted an annual survey for the past few years that collects financial information as well as output and outcome measures. Measures include: jobs created and maintained; number of businesses assisted; number of affordable housing units rehabilitated or developed; number of community facilities developed; types of technical assistance provided; and number of clients opening checking and savings accounts. In 2000, the Fund began working



with key stakeholders in the CDFI community on the Common Data Project (CDP) to collect standardized industry data. The CDP annual survey will likely replace the Fund's own annual survey, as it did for FY2000 data collection.

Several issues need to be addressed in the area of reporting. Except for annual survey (CDP) data, which is now entered into a Microsoft Access database, the Fund does not keep the data collected in its semi-annual or annual reports in any database. The reports are used only to record compliance. If the Fund entered data from these reports into a database along with baseline and organizational information, the data would prove far more useful. The Fund could look at progress toward benchmarks over time for individual awardees or funding years and better understand the progress awardees are making in the field. The Fund could analyze improvements across different fields and target markets and use the information to confirm or revise funding priorities. The Fund does not currently have the information management systems in place to make use of all this data.

Finally, the Fund does not currently audit financial or annual report findings. Staff usually conduct site visits during the award closing process, but the Fund lacks the resources to conduct post-award monitoring visits. Site visits that allow Fund staff to view programs and activities and review financial and program information with backup documentation would give the Fund an opportunity to verify the content of submitted reports.

Discussion

This alternative does not disrupt the Fund's day-to-day operations. By allowing present collection procedures to continue, the Fund does not have to alter the way it collects and processes these reports. However, current data collection and storage methods make it difficult for the Fund to quickly gather specific information about individual CDFIs and to monitor or track changes in an awardee's performance and financial condition.
For CDFIs, having to submit several reports a year may take up many of their resources and may result in incomplete or inaccurate reports. For regulated CDFIs, submitting semi-annual financial reports may be redundant and time consuming because the Fund's financial standards differ from those of other regulatory agencies. Regulated CDFIs may find complying with two separate sets of standards less than ideal. It is worth noting, however, that the Fund has taken steps to streamline its reporting requirements. Prior to the current structure, CDFIs were



required to submit quarterly performance reports. In response to awardee complaints and feedback, the Fund modified this requirement into the current reporting system. One advantage for the Fund of separating annual performance reports from annual surveys, however, is that CDFIs may feel freer to report detailed information about their activities in annual surveys because the surveys are not evaluated for compliance. The Fund may find that information gathered in performance reports differs slightly from survey responses due to fear of sanctions; the two sources can then be compared for accuracy.

The political feasibility criteria do not apply directly to reporting requirements. The extent to which the Fund's current level of reporting meets political expectations depends upon the content of the reports as well as the way the information is used. It is up to the Fund to determine the appropriate reporting requirements in order to effectively measure outcomes and impact.

The Fund's current reporting structure requires it to evaluate several performance and financial reports a year for each awardee as well as participate in the CDP survey. The Fund has to ensure that questions in the surveys and the performance reports do not overlap too much and that performance measures accurately reflect performance goals. In addition, the Fund also has to police CDFIs and ensure that reports are submitted on time. The greater the number of reports, the more difficult it will be for the Fund to monitor compliance and evaluate content. The Fund is required to fulfill all these duties despite staffing, training, expertise, and technological limitations. The Fund has some redundancy in its reporting requirements, requiring both an annual survey and an annual performance report and replicating the work of other regulatory agencies. As it stands, the Fund's current reporting requirements, as well as its method for processing and evaluating these reports and surveys, are not cost-effective.
Alternative Two

Require only non-regulated CDFIs to report on their financial soundness, and ask all awardees to report on impact measures.

How It Works

Under this alternative, the Fund will require only non-regulated CDFIs to report on financial soundness but ask that all awardees report on impact measures. The Fund, in other words, will not replicate the work of other regulatory agencies. For regulated CDFIs, the Fund



will rely on the financial soundness standards of other regulatory agencies. The Fund will not actively monitor the financial integrity of regulated CDFIs. This will lighten the administrative burden on the Fund. However, the Fund will have to coordinate with other agencies to receive reports and share information. At the same time, the Fund will not have discretion in determining financial soundness standards for regulated CDFIs and will therefore have to trust that these CDFIs comply with approved standards.

Discussion

As mentioned above, this alternative will lighten the Fund's administrative burden. Since regulated CDFIs such as credit unions make up a large proportion of the Fund's awardees, this alternative will have a significant, and positive, impact on the Fund's day-to-day operations. It will decrease the paper flow to the Fund so that the Fund can reallocate its resources to other activities. For regulated CDFIs, this new procedure will reduce redundancy in their reporting requirements; perhaps this will give them more time to complete performance reports and do a more thorough and complete job. This alternative will not affect non-regulated CDFIs in any direct way.

The outcome of this alternative may not be politically feasible if it is viewed by the Treasury or Congress as a method for the Fund to unload its responsibilities onto another federal agency. There is no federal statute requiring the Fund to monitor the financial soundness of its awardees; it is simply good management practice. The Fund will have to convince the Treasury and Congress that it is not disregarding its responsibilities but streamlining its operating procedures. In addition, it will also have to demonstrate to the Treasury and Congress that other regulatory agencies' financial soundness measures are in line with the Fund's own standards and that there is an adequate system of information sharing between the Fund and the other agencies.
Indirectly, this alternative will be cost effective. By freeing up resources, the Fund can improve its operating procedures to make the best use of staff time. What the Fund gives up in discretion over financial soundness standards, it will gain in added resources and time.



Comparing the Alternatives

[Table: each alternative below is rated on a five-point scale (+ +, +, 0, −, − −) against each of the evaluation criteria.]

Financial Alternatives
1. Maintain current minimal use of financial benchmarks.
2. Use additional financial measures.
3. Develop financial benchmarks for peer groups.

Performance Measurement Alternatives
1. Use the same mix of output and impact measures to assess Fund impact.
2. Fine-tune and expand activity measures to serve as proxies for impact.
3. Third parties conduct case studies, longitudinal studies, or experiments.

Reporting Alternatives
1. Continue to require different reports for financial soundness and impact measures.
2. Require non-regulated CDFIs to report on financial soundness; all awardees report on impact measures.

Evaluation criteria:
Administrative Feasibility: minimize disruption to the Fund; minimize time for CDFIs to collect data.
Cost-Effectiveness: minimize cost to the Fund; minimize cost of CDFI data collection.
Political Expectations: maximize funding from Congress; maximize support from Treasury.
Substantive Meaning: maximize Fund knowledge of CDFI activities; maximize reliability of data collected.
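A ratings table of this kind can be collapsed into a single score per alternative once the criteria are weighted. The sketch below is purely illustrative: the weight values and the sample ratings are invented, and only the rating scale itself comes from the table.

```python
# Sketch: combining qualitative criterion ratings into weighted scores.
# The five-point scale matches the comparison table; the weights reflect
# an ordering (political first, then substantive meaning, then the rest)
# but the specific numbers are illustrative assumptions.
RATING = {"++": 2, "+": 1, "0": 0, "-": -1, "--": -2}
WEIGHTS = {"political": 0.35, "substantive": 0.30,
           "administrative": 0.175, "cost": 0.175}

def score(ratings):
    """Weighted score for one alternative's ratings across all criteria."""
    return sum(WEIGHTS[c] * RATING[r] for c, r in ratings.items())

# Hypothetical ratings for two alternatives:
alt_a = {"political": "++", "substantive": "+", "administrative": "0", "cost": "-"}
alt_b = {"political": "+",  "substantive": "+", "administrative": "+", "cost": "+"}

print(round(score(alt_a), 3), round(score(alt_b), 3))
```

Making the weights explicit forces the trade-off argument into the open: an alternative that is merely adequate on every criterion can outscore one that excels politically but is costly to administer.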



Recommendations

Weighting the Criteria

Any recommendation we make must be politically feasible. If alternatives fail to demonstrate results to legislators and administration officials, the Fund will continue to be plagued by a lack of political, and thus financial, support. It would therefore be irresponsible to implement any plan that does not address the needs of the Fund's political constituency. Only slightly less important is substantive meaning. New policies must increase the value of the data for understanding the impact of the Fund's programs on the CDFI industry. Unless alternatives improve current knowledge of CDFI activities, it would be pointless to implement them even if they meet our other criteria. Alternatives must be cost effective in order to be administratively feasible, so we weight these two criteria equally. Recommendations must meet these two criteria, or their chances of implementation will be slim.

Financial Soundness Recommendation

We recommend using additional measures to better evaluate CDFIs' degree of financial soundness. This recommendation is politically feasible; it demonstrates to Congress and to the Department of the Treasury that the Fund is concerned that the institutions it supports are viable and have the financial capacity to make an impact in the community. Additional financial measures have the potential to help the Fund demonstrate its impact on building the capacity of CDFIs to increase access to capital.

This recommendation increases the Fund's knowledge in several ways. The Fund will be able to evaluate its impact on awardees' financial soundness from different angles using additional financial measures. The Fund's picture of awardees' financial health will be more reliable as more indicators are reported and analyzed. In addition, this recommendation is relatively painless to implement, particularly if the Fund begins with its current or next round of awardees. The Fund will incur only minor costs.
CDFIs already track many financial indicators for their own use as well as for other regulators, and reporting these indicators to the Fund will not be costly or time-consuming.



This recommendation addresses the policy tension between social impact and financial success by allowing the Fund to give slightly more weight to an awardee's financial soundness, achieving a better balance between the two goals. It does not address the tension between standardized and tailored financial measures.

Performance Measurement Recommendation

We recommend that the Fund fine-tune output measures to be better proxies for community impact. Output measures are quantifiable, easy to understand, and easy to communicate. With the addition of specific demographic or socioeconomic information, output measures can be powerful yet simple indicators of CDFI and Fund impact. Politically, this recommendation will improve the Fund's ability to demonstrate impact to Congress and the Treasury. This recommendation also improves the Fund's substantive understanding of impact without sacrificing data reliability. Output measures also focus on activities that CDFIs can control, increasing the Fund's ability to trace its impact in the field. Because this recommendation focuses on easier-to-measure outputs instead of more ambiguous impacts, it minimizes the time and costs required to collect and analyze data. This approach will not significantly increase the reporting burden for CDFIs, while it provides a wealth of information for both the industry and the Fund.

In the long run, we recommend that the Fund partner with a research institution to conduct in-depth studies of its impact on the CDFI industry. Although this may not be administratively feasible or cost-effective in the short run, this recommendation is the most effective means to determine the Fund's value-added. Only in-depth or long-term studies can effectively capture community impact by looking at quantitative as well as qualitative data. These studies will allow the Fund to use counterfactuals to get at the "but for" question.
By initiating this process now, the Fund will be able to move past current constraints to build long-term capacity. This recommendation improves the balance in the tension between measurability and meaning: using appropriate proxies for impact allows CDFIs to gather more accurate data while achieving a greater level of meaning. Similarly, this recommendation also improves the balance between meeting current constraints and building long-term capacity. Implementing this recommendation will allow the Fund and CDFIs to collect data that satisfies



political stakeholders. At the same time, simplifying impact measurement while increasing data meaning and reliability will allow the Fund to direct greater resources toward its mission. It will be able to spend more time and money on the long-term goals of improving programs and expanding the ability of the CDFI industry to provide capital to distressed communities.

Reporting Recommendation

We recommend letting present trends continue. The Fund should continue to require all CDFIs to submit both financial and performance reports. In addition, separating financial and performance reports from the annual survey will increase the reliability of collected data by reducing the incentive for awardees to inflate impact to avoid sanctions. While requiring several reports, including financial reports from regulated institutions, will maintain higher reporting costs for CDFIs, we believe that these costs will be minimal when weighed against the benefits of more complete, reliable data. Because this recommendation is procedural, we do not believe it will have any effect on the Fund's political support. It is more important to focus on the content of the reports than on the frequency with which they are submitted.

Data Management Recommendation

Throughout this report, we have discussed the Fund's lack of information; we believe effective management of this information is essential. It does not help to require more financial and performance data if the Fund lacks the capacity to store, analyze, and use it. The Fund already collects a great deal of information in semi-annual financial reports, annual performance reports, and surveys. However, it does not make good use of this data. As it stands, when reports are submitted, Fund staff examine them solely for compliance or non-compliance. Reports are not examined for commonalities or trends, and specific output or outcome data are not recorded.
We recommend that the Fund make greater use of these reports and enter the information into a comprehensive database. Fund staff should also use the narratives they collect to compile anecdotal and qualitative evidence on CDFI impact. These narratives provide a more textured representation of CDFI activities and will complement the quantitative impact data.
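At its simplest, such a database needs only a handful of relational tables linking awardees to the measures reported in each period. The sketch below uses SQLite; the table names, columns, and sample figures are invented for illustration, not a proposed schema:

```python
# Sketch: a minimal report database that makes benchmark progress queryable
# over time. Schema and field names are hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE awardee (
    id INTEGER PRIMARY KEY,
    name TEXT,
    award_year INTEGER
);
CREATE TABLE report (
    awardee_id INTEGER REFERENCES awardee(id),
    period TEXT,     -- reporting period, e.g. '2000'
    measure TEXT,    -- e.g. 'loans_closed'
    value REAL,
    rating TEXT      -- Outstanding .. Unacceptable
);
""")
conn.execute("INSERT INTO awardee VALUES (1, 'Example CDFI', 1999)")
conn.executemany("INSERT INTO report VALUES (?, ?, ?, ?, ?)", [
    (1, "2000", "loans_closed", 40, "Good"),
    (1, "2001", "loans_closed", 55, "Outstanding"),
])

# With report data in one place, progress toward a benchmark becomes a query
# rather than a file-cabinet search:
rows = conn.execute("""
    SELECT period, value FROM report
    WHERE awardee_id = 1 AND measure = 'loans_closed'
    ORDER BY period
""").fetchall()
print(rows)  # [('2000', 40.0), ('2001', 55.0)]
```

The same structure supports the cross-cutting questions raised earlier: grouping by measure, target market, or funding year instead of by awardee turns compliance records into trend data.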



Initially, the Fund will have to spend time creating an effective data management system. The Fund may need to make a large investment in a new, comprehensive database, or it may choose to make better use of its existing systems. The Neighborhood Reinvestment Corporation (NRC) provides a good example of how report data can be integrated into impact measurement with the help of innovative data management systems. NRC created its own database to track disaggregated data, and many of its awardees use the system (see sidebar for details). If the Fund implements such a system, a large capital investment will be needed, and additional staff will have to be hired simply for data management. The Fund will also need to train staff to clean and analyze the data collected. It will need to market the database product to CDFIs and help them install and use it. All of these things take huge amounts of time and resources.

The Fund does not have to follow this model exactly. Individual CDFIs do not necessarily need to use the Fund's database, though smaller and start-up CDFIs we have spoken with need help with data management and would benefit from technical assistance of this kind. The centerpiece of this plan is really to move the Fund toward keeping better track of the information it already gathers, so the development of the Fund's own system is vital. The ability of CDFIs to record and submit data electronically will, of course, be helpful in achieving this end.

Sidebar 4. Exemplary Data Management: Neighborhood Reinvestment Corporation
Organization: Neighborhood Reinvestment Corporation (NRC), a federal agency created in 1978.
Purpose: Revitalize older urban neighborhoods by mobilizing public, private, and community resources at the neighborhood level.
Awardees: About 200 total, including roughly 100 CDFIs that engage in housing lending, creation, and rehabilitation.
Technique: The NRC developed its own database (NWORKS2000), which is used by about half of its 200 awardees to record activities, client demographics, and other disaggregated data. When it is time to file reports, awardees pull up a report from their own database and submit it electronically to the NRC's headquarters. Three staff members at NRC spend about two weeks cleaning the surveys and then import the data into the master database, where the information is ready to extract and use. NRC is currently looking into a web-based platform to simplify the process further. The NRC conducts training on the database for its awardees and has consultants to help them with technical difficulties. The database is provided free of charge to all interested awardees.



Sidebar 5. The California Wellness Foundation: Using Grantee Reports in Evaluation

Guiding Principles for Evaluation
• Evaluation promotes key foundation goals, such as the sustainability of community organizations, by informing organization and foundation practices.
• Evaluation emphasizes skill building among grantees for self-assessment and continuous feedback for program improvement.
• Evaluation helps grantees set realistic goals and outcomes for dollars received.
• Evaluation creates a true learning community where institutional culture supports genuine inquiry rather than fear of judgment.
• Evaluation balances ultimate utility with costs incurred, including the time and energy required of grantees.

Current Evaluation Practices
• Focus on in-depth, qualitative description of the work, ongoing feedback, and outcome measures related to organizational capacity as intermediate steps to improving program quality.
• Brief narratives are submitted by grantees as part of each grant closeout and summarized by program staff for the foundation's board of directors. Narratives focus on organizational learning during the grant period; essential questions cover accomplishments, challenges, changes to proposed activities, and lessons learned.
• The board closeout form explicitly focuses on implications and lessons for future grant making.
• Each grantee is rated from 1 to 5 on how well it meets its objectives, the foundation's expectations, and foundation requirements.
• The foundation also undertakes cluster evaluations of groups of grants made for a similar purpose.

Greater efficiency and data storage capacity will help the Fund and its awardees use existing information to inform program-related decisions. The Fund will be better able to assess compliance issues and report submission problems, and both awardees and the Fund will benefit from an improved system of data management and evaluation. Through more efficient use of data, the Fund can more accurately demonstrate impact and thereby meet political expectations; its reports to Congress and the Treasury will be more robust and informative.

This alternative will be costly in both time and financial resources, especially at start-up. A new, comprehensive data management system with electronic platforms for CDFIs will be expensive to create and implement. In the long run, however, the investment made now will save the Fund time in retrieving and updating information. It will create a repository of information that can be analyzed longitudinally, one of the only ways to truly measure the impact of its activities. Effective data management increases the Fund's long-term organizational capacity and builds support among legislators and officials. It may also better measure awardees' progress and build industry knowledge. In time, the Fund will have more resources for mission-related activities and will be better positioned to support the CDFI industry.



Conclusion

Without question, the CDFI Fund plays a critical role in supporting the organizations that provide access to capital for distressed communities. To continue these efforts, the Fund must implement more effective data-gathering strategies that maximize political support, substantive meaning, and cost efficiency. Its mission depends on it.

Our recommendations will help the Fund accomplish both long- and short-term goals. Adding supplementary financial measures will allow the Fund to better evaluate the financial soundness of its CDFIs in the short term and will track the Fund's progress toward its goal of strengthening these institutions in the long term. Refining output measures to better capture impact will provide easily understandable and verifiable information today. Case studies or longitudinal studies will better isolate the effect of Fund dollars on the wellbeing of communities and on local economic growth. Good data management will bring these components together. The best measures of organizational impact are simple, strategic, and address progress toward mission (Sawhill, 2000, 18). We believe our recommendations meet these criteria and urge the Fund to implement them.



References

Andrews, Nancy O. "Equity with a Twist: The Changing Capital Needs of the Community Development Field." Capital Exchange Journal Article prepared for The Brookings Institution's Center on Urban and Metropolitan Policy, April 2001.

Bartik, Timothy J. and Richard D. Bingham. "Can Economic Development Programs Be Evaluated?" Upjohn Institute Staff Working Paper. Kalamazoo, MI: W.E. Upjohn Institute for Employment Research, 1994. 95-129.

Beauregard, Robert A. "The Employment Fulcrum: Evaluating Local Economic Performance." Economic Development Quarterly 13.1 (February 1999): 8-14.

Bhatt, Nitin and Shui-Yan Tang. "Making Microcredit Work in the United States: Social, Financial, and Administrative Dimensions." Economic Development Quarterly 15.3 (August 2001): 229-241.

Boarnet, Marlon G. "Enterprise Zones and Job Creation: Linking Evaluation and Practice." Economic Development Quarterly 15.3 (August 2001): 242-254.

Buss, Terry F. and Laura C. Yancer. "Cost-Benefit Analysis: A Normative Perspective." Economic Development Quarterly 13.1 (February 1999): 29-37.

Caskey, John P. "Bringing Unbanked Households Into the Banking System." Capital Exchange Journal Article prepared for The Brookings Institution's Center on Urban and Metropolitan Policy, January 2002.

"Community Development Financial Institutions (CDFIs): Bridges Between Capital and Communities in Need." Philadelphia, PA: National Community Capital Association, 2000.

Felsenstein, Daniel and Joseph Persky. "When Is a Cost Really a Benefit? Local Welfare Effects and Employment Creation in the Evaluation of Economic Development Programs." Economic Development Quarterly 13.1 (February 1999): 46-54.

http://www.fdic.gov/regulations/index.html

http://www.federalreserve.gov/regnsup.htm

Ihlanfeldt, Keith R. and David L. Sjoquist. "Conducting an Analysis of Georgia's Economic Development Tax Incentive Program." Economic Development Quarterly 15.3 (August 2001): 217-228.

Jenkins, Noah Temaner and Michael I.J. Bennett. "Toward an Empowerment Zone Evaluation." Economic Development Quarterly 13.1 (February 1999): 23-28.



Kolodinsky, Jane; Sue Holmberg; Caryl Stewart; and Antonia Bullard. "Vermont Development Credit Union: A Community Program that Works." Prepared for Pew Partnership for Civic Change, Solutions for America, 2002.

Lipson, Beth. "Developing Methods for Measuring Impact." Technical Assistance Memo. Philadelphia, PA: National Community Capital Association, 2000.

___________. "CDFIs Side by Side: A Comparative Guide." Philadelphia, PA: National Community Capital Association, 2000.

Martin, Garrett. "From the Field: Program Monitoring at The Enterprise Corporation of the Delta." Enterprise Corporation of the Delta, 2000.

___________. "From the Field: A Practitioner's Perspective on Developments in Program Monitoring." Enterprise Corporation of the Delta, 1999.

___________. "Enterprise Corporation of the Delta: 2000 Annual Survey Report." Enterprise Corporation of the Delta, 2000.

Nowak, Jeremy. "Civic Lesson: How CDFIs Can Apply Market Realities to Poverty Alleviation." Capital Exchange Journal Article prepared for The Brookings Institution's Center on Urban and Metropolitan Policy, March 2001.

Pinsky, Mark. "Taking Stock: CDFIs Look Ahead After 25 Years of Community Development Finance." Capital Exchange Journal Article prepared for The Brookings Institution's Center on Urban and Metropolitan Policy, December 2001.

Raynor, Jared. "Credit Union CDFI CORE Awardee Impact Analysis: Final Report." National Federation of Community Development Credit Unions (NFCDCU), August 2001.

Reese, Laura A. and David Fasenfest. "Critical Perspectives on Local Development Policy Evaluation." Economic Development Quarterly 13.1 (February 1999): 3-7.

Sawhill, John C.; Stephen C. Howell; and David Williamson. "Mission Impossible? Measuring Success in Nonprofit Organizations." Unpublished draft, The Nature Conservancy, 2000.

Schmitt, Brian. "Measuring Social Returns in Community Development Venture Capital." A Report for the Double Bottom Line Meeting. Unpublished paper, The Community Development Venture Capital Alliance, May 2001.

Tansey, Charles D. "Community Development Credit Unions: An Emerging Player in Low Income Communities." Capital Exchange Journal Article prepared for The Brookings Institution's Center on Urban and Metropolitan Policy, September 2001.



Tao, Jill L. and Richard C. Feiock. "Directing Benefits to Need: Evaluating the Distributive Consequences of Urban Economic Development." Economic Development Quarterly 13.1 (February 1999): 55-56.

Turner, Robyne S. "Entrepreneurial Neighborhood Initiatives: Political Capital in Community Development." Economic Development Quarterly 13.1 (February 1999): 15-22.


Appendix A: CDFI Industry Overview and Common Data Project Background

What is a CDFI?
Community Development Financial Institutions (CDFIs) are financial institutions that invest in individuals, small businesses, quality affordable housing, and community services that benefit economically disadvantaged people and communities (NCCA, 3).

Why CDFIs? Why are they needed?
CDFIs operate on the premise that economically disadvantaged people and communities need access to capital to start and expand businesses, to build and purchase homes, and to provide needed community services, such as child care and clinics, for a healthy community and economy. CDFIs work where mainstream financial institutions do not. In recent decades, financial institutions have streamlined operations and closed locally based banks and bank branches, and branches in low-income areas have declined substantially. Even where branches do exist, many low- to moderate-income people do not qualify for personal or business loans, and nonprofit institutions find it difficult to obtain loans for affordable housing and other projects. CDFIs try to bridge this gap in capital.

Types of CDFIs
• Depository CDFIs (community development banks and community development credit unions). These institutions take deposits, offer financial transaction services, and make loans. Some provide financial education services.
• Non-depository CDFIs (community development loan funds and community development venture capital funds). Loan funds raise capital from institutional and individual investors and provide primarily debt financing. There are hundreds of loan funds, including many with a specific focus on housing, business, microenterprise, or community facilities. Venture capital funds raise funds principally from institutional investors and provide both venture (equity) financing and debt financing.
History
Community development financial institutions have their roots in the beginnings of credit unions in the early 1900s. During the civil rights movement and the War on Poverty, the field of community development finance was grounded in the idea that a significant proportion of U.S. citizens, including many ethnic minorities, was excluded from the economic mainstream and from real opportunities to create wealth (Pinsky, 26; NCCA, 7). During the Johnson administration, community development corporations (CDCs) emerged as a community-based vehicle for targeted economic revitalization; CDCs were the real predecessors of CDFIs. In the 1970s and 1980s, CDFIs arose to organize financial capital from private sources in order to fund the activities of community-based organizations. At this early stage, religious institutions played a major role in supporting community finance.



In the 1990s, CDFIs grew dramatically. The Community Reinvestment Act (CRA) was passed in 1977 to require federally insured depository institutions to help meet the credit needs of entire communities, including low- and moderate-income areas and individuals; however, CRA enforcement was spotty until the last decade. In addition, 1995 saw the revision of CRA regulations to explicitly recognize loans and investments in CDFIs as qualified CRA activity. In total, CDFIs now manage more than $6 billion in assets.

The Common Data Project
The Common Data Project (CDP) is an endeavor by the Fund and nine other national CDFI trade organizations to create a common platform for data collection and management and an enduring system that will strengthen the community development finance field and the institutions within it. This multi-year collaboration should also help minimize data reporting requirements for CDFIs and build the capacity of CDP members. The survey includes questions about financial performance as well as community impact. It was distributed to CDFIs in 2001, with each participating trade organization in charge of its segment of the CDFI population. All surveys had a common set of core questions, and trade organizations added questions where they had particular interests. The survey was completed by 379 CDFIs based on year 2000 activities, a response rate of 82 percent.

Common Data Project Participants
Association for Enterprise Opportunity (AEO)
Aspen Institute
Community Development Venture Capital Alliance (CDVCA)
CDFI Fund
CDFI Coalition
Corporation for Enterprise Development (CFED)
Ford Foundation
MacArthur Foundation
National Community Capital Association (NCCA)
National Congress for Community Economic Development (NCCED)
National Community Investment Fund (NCIF)
National Federation of Community Development Credit Unions (NFCDCU)
Neighborhood Reinvestment Corporation (NRC)
Woodstock Institute


Appendix B: Explanation of Financial Ratios

In addition to submitting unaudited and audited financial statements with auditor's notes and explaining qualified opinions, all CDFIs, regardless of type, size, market, age, or region, must meet four financial soundness covenants. Awardees must report on these covenants semi-annually. The current covenants are:

1. Net Asset Ratio (Net Assets / Total Assets) >= 0.25. The Net Asset Ratio serves as an indicator of the underlying financial strength of a nonprofit organization's equity base relative to its total assets and whether it has sufficient equity to cover unexpected losses.

2. Operating Liquidity >= 1. The Operating Liquidity Ratio ((cash + cash equivalents + marketable securities) / 25% of last FY operating liabilities) measures the extent to which an organization has sufficient operating reserves on hand to pay its expenses. A ratio of 1.0 or greater means that an organization has at least 3 months of liquid assets on hand to cover 3 months of expenses (25% represents 3 months of the year's activities).

3. Capital Liquidity >= 1. The Capital Liquidity Ratio ((cash + cash equivalents + marketable securities + 25% of current loans receivable) / current liabilities) is similar to the Operating Liquidity Ratio but measures the extent to which an organization has sufficient liquid capital to cover its current liabilities.

4. Net Revenue > 0. Net Revenue (Total Revenue - Operating Expenses) measures whether an organization has the resources to pay its annual operating expenses. Trends in Net Revenue (or Net Income) are an indicator of the financial solvency of an organization.

The Fund looks at other ratios when making funding decisions:

Annual Net Loan Loss Ratio: The portion of an organization's Total Outstanding Loan Portfolio that is so delinquent that it has been deemed uncollectable and assumed to be a loss.

Current Ratio: Current assets divided by current liabilities.

Delinquency Ratio: The total dollar amount of loans with payments 30 days or more past due divided by the Total Outstanding Loan Portfolio (or Total Loans, in the case of banks or thrifts). This ratio is measured in different ways for different organizations: nonregulated institutions use an aging schedule of 30, 60, and 90 days; insured credit unions use 2 months, 6 months, and 12 months; and banks and thrifts use 31, 61, and 91 or more days. The Delinquency Ratio is also commonly known as the Portfolio-at-Risk.


50

Deployment Ratio: Indicates the extent to which debt capital and equity capital (both restricted and unrestricted) are actually deployed in loans or equity investments. Serves as an indicator of how aggressive an organization has been in using available capital.

Equity Investment-to-Asset Ratio: An indicator of the extent to which an organization's assets are available for making equity investments and how aggressive an organization has been in investing available capital.

Loan Loss Reserves: Funds set aside in the form of cash reserves or through accounting-based accruals that serve as a cushion to protect an organization against potential future losses.

Loan Loss Reserve Ratio: Loan loss reserves divided by Total Outstanding Loan Portfolio. Describes the amount of an organization's loan portfolio it assumes it may lose.

Loan-to-Asset Ratio: An indicator of the extent to which an organization's assets are available for lending and how aggressive an organization has been in its loan production.

Loan-to-Deposit Ratio: Total Outstanding Loan Portfolio divided by the total dollar value of deposit liability accounts held by a bank or thrift. Assesses the extent to which deposit liability accounts are used to make loans. Can be used as a liquidity indicator.

Loan-to-Share Ratio: For insured credit unions, analogous to the Loan-to-Deposit Ratio.

Net Assets: Indicates the extent to which an organization's total assets exceed its total liabilities.

Net Asset Ratio: Indicator of the underlying financial strength of a nonprofit organization's equity base relative to its total assets and whether it has sufficient equity to cover unexpected losses.

Net Charge-off (Net Write-off): Total dollar amount of loans determined to be a loss or non-recoverable during the course of an organization's fiscal year and taken off the books, less loan amounts charged off but later collected.

Net Income: Total revenue less total expenses. An indicator of the financial solvency of the organization.

Net Worth: Also known as total equity.

Net Worth Ratio (Equity Ratio): Net worth divided by total assets; an indicator of the strength of an organization's equity base relative to total assets.



Self-Sufficiency Ratio: Earned income divided by total expenses. Measures the extent to which an organization is covering its annual expenses through internally generated sources rather than grants or other contributions.

Total Outstanding Loan Portfolio: Total dollar amount of gross loans receivable. The principal amount of loans receivable held by an organization that represents the amount still owed the organization by its borrowers before loan loss reserves are calculated.
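The four covenants are simple arithmetic on statement line items, so they can be checked mechanically. The sketch below is our own illustration, not the Fund's actual tooling; the function name and the sample figures are invented, and we read the covenant thresholds as minimums, per the explanations above.

```python
# Illustrative sketch (not the Fund's tooling) of checking the four
# financial soundness covenants from statement line items.
# liquid_assets = cash + cash equivalents + marketable securities.

def covenant_checks(net_assets, total_assets, liquid_assets,
                    last_fy_operating_liabilities, current_loans_receivable,
                    current_liabilities, total_revenue, operating_expenses):
    """Map each covenant to a (value, passes) pair."""
    net_asset_ratio = net_assets / total_assets
    # 25% of last FY's operating liabilities = roughly three months of activity
    operating_liquidity = liquid_assets / (0.25 * last_fy_operating_liabilities)
    capital_liquidity = ((liquid_assets + 0.25 * current_loans_receivable)
                         / current_liabilities)
    net_revenue = total_revenue - operating_expenses
    return {
        "net_asset_ratio":     (net_asset_ratio, net_asset_ratio >= 0.25),
        "operating_liquidity": (operating_liquidity, operating_liquidity >= 1.0),
        "capital_liquidity":   (capital_liquidity, capital_liquidity >= 1.0),
        "net_revenue":         (net_revenue, net_revenue > 0),
    }

# Invented example figures for a hypothetical CDFI (dollars):
checks = covenant_checks(net_assets=3_000_000, total_assets=10_000_000,
                         liquid_assets=1_200_000,
                         last_fy_operating_liabilities=4_000_000,
                         current_loans_receivable=2_000_000,
                         current_liabilities=1_500_000,
                         total_revenue=5_000_000, operating_expenses=4_800_000)
for name, (value, passes) in checks.items():
    print(f"{name}: {value:,.2f} {'PASS' if passes else 'FAIL'}")
```

Because all four measures reduce to a value and a threshold, a check of this kind could be run on every semi-annual report with no manual calculation.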


Appendix C: Important Limitations of CDP Data

Financial Soundness Benchmarks by Type of Institution (Tables 1.1-1.7)

There are several things to notice at the outset when looking at the descriptive statistics by type. First, the medians and the means differ across almost all of the key financial measures. This indicates that we do not have a normal distribution; rather, there are extremely low and extremely high scores within institution types that pull the mean disproportionately up or down. Ideally, with a centered or normal distribution, the mean and the median would approximate each other, which would make us more confident in describing the data.

As is, we have a couple of choices. First, we could eliminate outlying measures on the financial ratios to make the descriptive statistics more reliable. There are downsides to doing this, however. The most obvious is that removing observations leaves us with a smaller group to generalize to that may or may not resemble the entire group of organizations that participated in the CDP. Also, while outliers could simply be atypical cases or data entry errors, they could also be unique and valid cases that should not be thrown out. On the other hand, if we use the median exclusively to describe the data, since it is immune to outliers, and do not trim irregular observations, we are limited in the conclusions we can draw. Unfortunately, the fundamentally skewed distribution of the data will wreak havoc on any statistical tests we might try to run.[7]

It is also important to note that the number of firms in each group differs. While there are 193 credit unions and 130 loan funds in the sample, there are only 16 banks, 25 venture capital funds, and 6 multi-banks. This means that, while we may feel fairly confident about our estimates of the mean and median for the two larger groups, the estimates for the smaller groups may be too heavily influenced

[7] We tried fitting several different kinds of regression models to capture relationships between the financial ratios and supposed determinants (benchmark parameters), only to find that the coefficients estimated by the models were highly unreliable because of heteroscedastic error distributions, even after statistically "adjusting" for outlying observations by using robust standard errors.



by sampling and/or response bias. That is, perhaps the multi-banks that happened to respond to the Common Data Project had unusually high or unusually low scores on the measures; without enough observations, we cannot be sure. The different group sizes also make it difficult to perform statistical tests of whether the financial measures differ significantly by type, since any such tests would be driven primarily by the two larger groups. Even if there are differences, we would probably not be able to detect them. In theory, if we knew how many of each institution type existed in the population of CDFIs, we could weight each institution type; however, this is nearly impossible, since the CDP itself is the most definitive source of industry information currently available.

Nevertheless, when trying to determine whether type is a good determinant of the financial soundness of CDFIs, it is helpful to look at scatterplots and compare them to what we would expect to see if a perfect relationship between institution type and the measures really did exist. In such a case (see Figure 1.1), we would see a couple of key trends. First, if type were the primary determinant of financial soundness measures, we would see little or no variation within types: all credit unions would tend to score the same, as would loan funds, banks, venture capital funds, and multi-banks. Second, most or all of the variation would be captured between the groups; visually, each institution type's scores on the financial measures would look highly distinct from those of other types. This is clearly not the case across most of the financial soundness measures we looked at in the Common Data Project. In fact, there seems to be little variation among groups, which we see most clearly in the cases of net income (Figure 1.2), the self-sufficiency ratio (Figure 1.4), the net loan loss ratio (Figure 1.5), the loan loss reserve ratio (Figure 1.6), and the delinquency ratio (Figure 1.8). That is, we cannot clearly identify that one group tends to score significantly differently from the others. One thing that attracts immediate attention is the high amount of variation within groups that has nothing to do with institution type; we see this most clearly in net revenue (Figure 1.3) and the net asset ratio (Figure 1.7).



Even so, these observations do not rule out the possibility that type plays a role in CDFIs' scores on financial soundness measures. In fact, if we trimmed off outlying observations and could somehow add more banks, venture capital funds, and multi-banks to the sample, we might be able to better identify any differences by type that may exist. Type could very well partially explain some of the variation. What is important to note here is that type is far from a perfect indicator and may be dominated by other, unaccounted-for measures that explain larger amounts of variation in CDFIs' financial performance than type alone.

Score Card by Asset Size

As with the scorecard by type, we notice that the mean and median scores for each asset classification often do not resemble each other. Once again, this signals the presence of outlying observations that do not fit general trends and presents the dilemma of either eliminating them in hopes of generalizing more soundly to a smaller group or dealing with the data as is, even though we thereby sacrifice the ability to say more interesting things about the relationship between asset size and each of the financial measures.

We also note that the groupings by total assets are not similar in size: one group has some 350 observations, while the others have only 12 and 3, respectively. These groupings were chosen based on the distribution of the data for asset size. The first group is especially large because a large number of observations cluster together, even though they span a very wide range. The problem is that we should have a substantive reason for drawing a line to divide the cloud of observations; in the absence of clear break points, doing so might seem arbitrary. Are we really sure that firms on one side of the line differ from those on the other? The two latter groups of asset sizes are essentially two separate groups of outlying observations that seem separate from the rest. It might be advisable to set these two groups aside altogether and focus on the concentration of cases in the first group. New break points and natural cleavages among these observations could then be identified, which might also make it possible to identify statistical trends within this smaller group. The problem then becomes one of applicability: is the smaller group



really one that represents the industry as a whole, especially given our small numbers of the different kinds of firms in the sample?

If asset size indeed had a strong relationship with CDFIs' scores on financial soundness measures, we would expect to see something like Figure 2.1. In our example, the relationship between the ratio and asset size is positive, but it could just as well be negative (slanted in the other direction). In addition, the relationship between the variables might not be linear, as in our example; a ratio might increase up to a certain asset size and then taper off. Regardless, we would see observations lining up with each other, revealing clear patterns of variation along the dimension of total assets, with little of the other variation or cloudiness that might be the effect of some other variable acting upon the financial measure.

In the Common Data Project, we see a mixture of results. In the case of net revenue (Figure 2.3), for example, the scatterplot suggests a positive relationship. However, much of the variation does not seem attributable solely to asset size, since there are many differences in net revenue among like-sized firms. Similarly, in the cases of net income (Figure 2.2), the self-sufficiency ratio (Figure 2.4), the net loan loss ratio (Figure 2.5), the loan loss reserve ratio (Figure 2.6), and the delinquency ratio, the differences we see among CDFIs do not look like they have much to do with asset size. In fact, almost all the variation on these measures takes place among smaller firms; this is most obvious in the scatterplot of the net asset ratio (Figure 2.7). However, we must be cautious about concluding that firms' asset size is unimportant, given the dearth of medium and large organizations in our sample. It could well be that size does matter but that we cannot identify these trends clearly for lack of information about larger firms. The result may be that any differences across asset size are swamped by the differences among small CDFIs.
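The mean-versus-median caution running through this appendix is easy to demonstrate. In the toy example below (all figures invented for illustration), a single outlying observation pulls the mean of a group of ratios well away from its center while barely moving the median, which is why medians are the safer summary for skewed CDP data.

```python
# Toy illustration (invented figures) of why outliers make the mean a
# misleading summary of skewed CDFI data while the median stays robust.
from statistics import mean, median

# Hypothetical net asset ratios for nine small CDFIs:
ratios = [0.18, 0.20, 0.22, 0.24, 0.25, 0.26, 0.28, 0.30, 0.32]
print(mean(ratios), median(ratios))   # both approximately 0.25

# Add one outlier, e.g. a nearly all-equity start-up:
with_outlier = ratios + [2.50]
print(mean(with_outlier))             # mean jumps to roughly 0.475
print(median(with_outlier))           # median barely moves, about 0.255
```

The same logic drives the trimming-versus-keeping dilemma above: dropping the 2.50 observation restores the mean, but only on the assumption that the outlier is noise rather than a genuine, unusual institution.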


Appendix D: Negotiation and Use of Performance Measures

Each CDFI that receives funding negotiates an assistance agreement with the Fund. This agreement includes a performance schedule with goals, measures, and benchmarks that the Fund uses to evaluate awardee performance. The goals and measures are derived from the awardee's application materials, including its five-year comprehensive business plan. Staff in the Program Operations department use industry knowledge to help awardees develop suitable performance benchmarks. During the award period, awardees submit semi-annual financial reports as well as annual performance reports and narratives; they are also required to submit audited financial statements each year. The Fund's Compliance, Monitoring, and Evaluation staff check these reports for compliance against the negotiated benchmarks. The flowchart on the following page depicts the negotiation and use of performance measures and goals, taking the reader from application through post-disbursement monitoring at the Fund.


[Flowchart: negotiation and use of performance measures, from application through post-disbursement monitoring]

Application to Disbursement:
1. Application received by Awards Mgmt.
2. Compliance check (if the awardee has a previous Fund award, it must be in compliance to receive funding in the current round).
3. Award selection by Program Operations Readers.
4. Negotiation of Assistance Agreement with P.O.
5. Disbursement of Award.

Post-Disbursement Award Period:
6. Awardees submit semi-annual financial reports and annual activity reports.
7. Compliance/Monitoring reviews reports and makes recommendations if noncompliant; assistance agreements are amended if necessary.
8. Close Assistance Agreement (Legal and P.O.).

