
What are CAMELS and who should know?

January 1, 1999

Authors

Ron J. Feldman, Senior Financial Specialist
Jason Schmidt, Senior Financial Economist

The end of the year is a natural time to contemplate performance reviews. Students receive their A through F grades, bosses complete year-end assessments and best and worst lists abound. In that spirit, this article describes the rating system used by supervisors to assess banks, reviews the grades for Ninth District banks and briefly discusses some policy issues raised by bank ratings.

In 1979, the bank regulatory agencies created the Uniform Financial Institutions Rating System (UFIRS). Under the original UFIRS a bank was assigned ratings based on performance in five areas: the adequacy of Capital, the quality of Assets, the capability of Management, the quality and level of Earnings and the adequacy of Liquidity. Bank supervisors assigned a 1 through 5 rating for each of these components and a composite rating for the bank. This 1 through 5 composite rating was known primarily by the acronym CAMEL.

A bank that received a CAMEL of 1 was considered sound in every respect and generally had component ratings of 1 or 2 while a bank with a CAMEL of 5 exhibited unsafe and unsound practices or conditions, critically deficient performance and was of the greatest supervisory concern. While the CAMEL rating normally bore close relation to the five component ratings, it was not the result of averaging those five grades. Rather, supervisors consider each institution's specific situation when weighing component ratings and, more generally, review all relevant factors when assigning ratings. (A similar process and component and composite system exists for bank holding companies.)

The UFIRS was revised at year-end 1996, and CAMEL became CAMELS with the addition of a component grade for the Sensitivity of the bank to market risk (that is, the degree to which changes in market prices, such as interest rates, adversely affect a financial institution). The end of 1996 also saw a change in the communication policy for bank ratings. Starting in 1997, supervisors were to report the component ratings to the bank; before then, they reported only the numeric composite rating. Supervisors continue to forbid the release of a specific bank's component and composite ratings to the public, raising issues discussed below.
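For readers who prefer a concrete sketch, the structure of a CAMELS rating can be illustrated in code. The sketch below is purely illustrative and is not a supervisory tool or formula: the class and field names are our own, and the composite is represented as an independently assigned judgment rather than a value computed from the components, consistent with the description above.

```python
# A minimal, illustrative sketch of the CAMELS structure described above.
# It is not a supervisory tool; names and validation checks are hypothetical.
from dataclasses import dataclass
from typing import Dict

COMPONENTS = (
    "Capital adequacy",
    "Asset quality",
    "Management capability",
    "Earnings",
    "Liquidity",
    "Sensitivity to market risk",
)

@dataclass
class CamelsRating:
    # Each component is rated 1 (strongest) through 5 (weakest).
    components: Dict[str, int]
    # The composite is assigned by examiners after weighing the bank's
    # specific situation; it is not a mechanical average of the components.
    composite: int

    def __post_init__(self) -> None:
        if set(self.components) != set(COMPONENTS):
            raise ValueError("expected exactly the six CAMELS components")
        if not all(1 <= r <= 5 for r in self.components.values()):
            raise ValueError("component ratings must be 1 through 5")
        if not 1 <= self.composite <= 5:
            raise ValueError("composite rating must be 1 through 5")

# Example: a bank rated 1 or 2 on every component and judged composite 1,
# matching the article's description of a sound institution.
example = CamelsRating(
    components={
        "Capital adequacy": 1,
        "Asset quality": 2,
        "Management capability": 1,
        "Earnings": 2,
        "Liquidity": 1,
        "Sensitivity to market risk": 2,
    },
    composite=1,
)
```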

CAMELS ratings in the Ninth District as of the third quarter of 1998 reflect the excellent banking conditions and performance of the last several years. A comparison between the distribution of ratings in the most recent quarter and the distribution 10 years ago, during the height of the national banking crisis, is illustrative (221 banks failed nationally in 1988, while only three failed in 1998). Nearly 100 percent of Ninth District banks currently fall into the top two ratings, with 40 percent receiving the top grade. Ten years ago, one-third of Ninth District banks fell into the bottom three ratings, and only about one in 10 banks received the highest grade.

[Figure: Distribution of CAMEL ratings for Ninth District banks]


Policy questions

Analysts have raised a number of questions about bank ratings, focusing in particular on their ability to measure bank financial health relative to alternative systems. For example, Federal Reserve economists found that CAMELS ratings were better able to forecast bank distress than statistical monitoring regimes, but only when the CAMELS rating was "fresh" (assigned within the last six months).

Perhaps more important is the capability of bank ratings compared with market measures of bank health. If market measures of bank risk taking can be shown to be superior to the CAMELS system, that finding could bolster arguments for supplementing the supervisory structure with market processes. One way to make such a comparison is to determine whether CAMELS ratings reflect assessments of bank risks before market measures capture such risk. In contrast to previous work, recent research has found that bank ratings capture bank risk taking several months before market pricing reflects it. (DeYoung, Flannery, Lang and Sorescu, "The Informational Advantage of Specialized Monitors: The Case of Bank Examiners," Working Paper 98-4, Federal Reserve Bank of Chicago.)

The possibility that bank ratings contain new, valuable information has led some analysts to call for their release to the public to improve market assessment and discipline of bank risk taking. More generally, this bank has suggested that additional disclosures of financial information by banks could improve market discipline; such disclosures could include the information examiners use to derive the CAMELS rating, but not necessarily the CAMELS rating itself. ("Fixing FDICIA," 1997 Annual Report, The Region, March 1998.)

However, the release of the CAMELS rating is controversial because of its potential costs. For example, public release could alter the dynamics under which supervisors produce the ratings. In particular, release could make bankers more sensitive to their ratings and thus make the examination process more contentious and less open to forthright sharing of information.

The potential response to public release from depositors and bankers could also lead to a change in the behavior of the examiners who assign the ratings. For example, they could be slower to change ratings or may use alternative means of communicating their findings to bankers. As such, the release of the ratings could have the perverse effect of reducing the new information they contain. The difficulty of weighing the often intangible costs and benefits of public release suggests that any policy change would involve contentious debate and require additional research.