The Wall Street Journal claims that our ratings, including the Morningstar Rating for Funds (the "star rating") and the Morningstar Analyst Rating, have not succeeded. The Journal's analysis, which we disagree with, suggests that highly rated funds do not outperform low-rated funds in the future. However, even using the Journal's own findings, which were selectively disclosed in the feature article that ran today, we find that highly rated funds were far more likely to outperform low-rated funds in the future.
We've long believed in the merit of the straightforward, transparent approach the star rating takes to ranking funds: It's an objective "report card" on funds' past performance. By the same token, we've frequently acknowledged the star rating's limitations, which are common to any measure that relies on past performance. Since launching the star rating in 1985, we've augmented it with a host of other tools and measures and made enhancements to our methodology several times along the way, the forward-looking Morningstar Analyst Rating being a signal example.
We've encouraged users to consider combining the star rating with other data and measures to aid in fund selection. In this way, users could benefit from some of the star rating's more distinctly valuable features--that is, the way it emphasizes longer time frames, accounts for risk, and measures performance after fees and charges, considerations that don't normally figure into "leaders and laggards" tallies--while leveraging other forward-looking measures like the Morningstar Analyst Rating. In that context, we've often described the star rating as a potential starting point for research.
Star Rating Performance
We've run several recent studies on the performance of the star rating. You can find them here: "Does the Star Rating for Funds Predict Future Performance?" and "The Morningstar Rating for Funds: A Good Starting Point for Research."
To summarize, we found that the star rating possesses moderate predictive power, which is what we'd expect of a starting point for research. It points investors toward cheaper funds that are easier to own and more likely to outperform in the future--qualities that correspond with investor success.
You wouldn't know that from reading The Wall Street Journal piece, which portrays the star rating as ineffective in tilting the odds in investors' favor. But the Journal's own analysis largely corroborates what we found in our own tests of the rating's performance: The odds of success were much higher for high-rated funds than for low-rated funds, as shown in a panel of the Journal's analysis (which wasn't included in the feature piece):
The left column of the above table shows a fund's starting star rating, while the second row from the top shows the subsequent rating that those funds achieved in the 10-year period that followed. What the Journal itself found is that while high-rated funds didn't unerringly outperform over the decade that followed the rating, they were far more likely to succeed (defined as a subsequent 4- or 5-star rating) than low-rated funds. For instance, 5-star funds succeeded about seven times more often than 1-star funds. Conversely, low-rated funds failed (defined as a subsequent 1- or 2-star rating, or death through merger or liquidation) at a much higher rate than highly rated funds. By this definition, 1-star funds were twice as likely to fail as 5-star funds.
Again, don't take our word for it: The Wall Street Journal is saying so.
Analyst Rating Performance
The Journal's piece also raises questions about the efficacy of the Morningstar Analyst Rating. For those unfamiliar, the Analyst Rating is a forward-looking, qualitative assessment of a fund's prospects. Morningstar's manager-research analysts assign these ratings based on their evaluation of factors like people, process, performance, parent, and price. The rating takes the following form (from highest to lowest): Gold, Silver, Bronze, Neutral, Negative. We expect "medalist funds" (Gold, Silver, Bronze) to outperform relevant peers or benchmark indexes over a full market cycle. Neutral and Negative funds are those in which our analysts have less conviction.
Because the Analyst Rating is not quite six years old, we haven't conducted the same battery of tests on it as we have on the star rating. All the same, as you'd imagine, we've been tracking it, and what we've found to this point is that the Analyst Rating does a pretty good job of pointing investors toward high-performing funds. To illustrate, the table below compares the average annual CAPM alpha that rated funds produced over the three- and five-year periods that followed an Analyst Rating:
What you're seeing is that, on average, higher-rated funds tended to produce higher alphas (i.e., positive excess returns unexplained by their sensitivity to the market) than lower-rated funds. This trend was more pronounced over the subsequent five-year period than the subsequent three-year period. We clearly have some work to do to improve the relative performance of Bronze-rated funds compared to Neutral-rated funds, but overall the Analyst Rating appears to have acquitted itself well. (Note that the Negative ratings cohort was much smaller than the others, accounting for only around 4% of ratings issued over this span.)
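For readers unfamiliar with the measure, CAPM alpha is the intercept from regressing a fund's excess returns on the market's excess returns; a positive alpha means returns beyond what market sensitivity (beta) alone would explain. A minimal sketch of that calculation, using made-up monthly returns (this is an illustration of the general technique, not Morningstar's exact implementation):

```python
# Estimate CAPM alpha and beta by ordinary least squares:
# regress the fund's excess returns on the market's excess returns.
# All return figures below are hypothetical, for illustration only.

def capm_alpha(fund, market, rf):
    """Return (alpha, beta) from OLS of (fund - rf) on (market - rf)."""
    ex_f = [f - r for f, r in zip(fund, rf)]    # fund excess returns
    ex_m = [m - r for m, r in zip(market, rf)]  # market excess returns
    n = len(ex_f)
    mean_f = sum(ex_f) / n
    mean_m = sum(ex_m) / n
    cov = sum((x - mean_m) * (y - mean_f) for x, y in zip(ex_m, ex_f)) / n
    var = sum((x - mean_m) ** 2 for x in ex_m) / n
    beta = cov / var
    alpha = mean_f - beta * mean_m  # per-period (here, monthly) alpha
    return alpha, beta

# Hypothetical monthly returns for a fund, the market, and cash
fund   = [0.02, -0.01, 0.03, 0.01, 0.00, 0.02]
market = [0.015, -0.02, 0.025, 0.01, -0.005, 0.02]
rf     = [0.001] * 6

alpha, beta = capm_alpha(fund, market, rf)
print(f"alpha={alpha:.4f}, beta={beta:.2f}")
```

In practice the monthly alpha would be annualized and averaged across the funds in each ratings cohort before being compared.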
But in the Journal's telling, the Analyst Rating hasn't succeeded in pointing investors toward higher- or lower-performing funds. Before we get to its results, you should know that the Journal's test was different from ours--instead of measuring predictiveness using alpha, as we did, the Journal measured it based on a fund's future star rating. Essentially, the Journal was looking to see whether medalist funds (Gold, Silver, Bronze) earned higher star ratings over the ensuing three- and five-year periods than non-medalists (Neutral and Negative).
(We had counseled the Journal against using the star rating as a measure of the Analyst Rating's predictiveness for a simple reason: The star rating is based on funds' trailing risk- and load-adjusted returns versus category peers, but analysts do not take loads into consideration when assigning Analyst Ratings. The two are therefore mismatched, a point we made to the Journal in urging it to reconsider the star rating in favor of a risk-adjusted measure like CAPM alpha. The Journal opted against our advice.)
Here again, we'll delve into the Journal's own findings--those it included in an exhibit accompanying the piece, not the piece itself--to illustrate that the Analyst Rating has performed well.
The left column shows the Analyst Rating at the start of each period and the columns to its right show the star rating that those funds achieved, on average, over the subsequent three- or five-year periods. For instance, from the above we learn that 30% of Negative-rated funds achieved a 1- or 2-star rating over the subsequent three-year period, on average, and another 30% died (after being merged or liquidated away), and so forth for the other Analyst Ratings and time periods.
What the Journal's own analysis tells us is that Gold-rated funds were almost twice as likely to succeed (defined as a subsequent 4- or 5-star rating) as Neutral- and Negative-rated funds, on average. Conversely, Neutral- and Negative-rated funds were much more prone to failure (defined as a subsequent 1- or 2-star rating, or death through merger or liquidation) than Gold-rated funds.
These results are consistent with our own findings from evaluating the Analyst Rating using a risk-adjusted measure like CAPM alpha. In summary, while the Journal paints a downcast picture of the Analyst Rating, it has performed well in the nearly six years we've been assigning it, with the Journal's own findings seeming to corroborate ours.
The Journal's story notwithstanding, the star rating has been a useful starting point for research that tilts the odds of success in investors' favor. The forward-looking Analyst Rating, while newer, has also exhibited predictive power. Used together or separately, we think these ratings can improve outcomes and help investors achieve their goals, which is entirely in keeping with our mission as a firm.
In its analysis, the Journal cites average future star ratings as a measure of predictiveness. We think this approach suffers from two shortcomings, both of which we brought to the Journal's attention in the run-up to this article's publication.
- The rating is bounded by a ceiling (5) and a floor (1). Arithmetically, this means that any average will tend to pinch toward the middle (3). An average thus obscures the range of outcomes an investor might experience in a fund with a given rating; a distribution conveys that range far better.
- The Journal excluded merged funds when calculating the average future star rating. (It assumed that funds which were liquidated earned a 1-star rating over the subsequent three-, five-, or 10-year period.) This tends to inflate the average future rating of 1- and 2-star funds to a greater extent than it does 4- and 5-star funds, as it’s much more common for low-rated funds to be merged away in the future. Indeed, the Journal’s own analysis found that 48% of 1-star funds were merged away in the 10 years that followed the rating, vs. only 13% of 5-star funds.
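Both shortcomings can be seen in a toy example (all numbers below are hypothetical, not the Journal's figures): a bounded 1-to-5 scale lets very different outcome distributions share the same average, and dropping merged funds from a cohort mechanically raises that cohort's average future rating.

```python
# Toy illustration of the two shortcomings above; all numbers made up.

# (a) Bounded 1-5 ratings: very different distributions, same average.
spread_out = [1, 1, 5, 5]   # half fail badly, half succeed strongly
clustered  = [3, 3, 3, 3]   # everyone lands in the middle
assert sum(spread_out) / 4 == sum(clustered) / 4 == 3.0

# (b) Excluding merged funds inflates a low-rated cohort's average.
# Suppose half of a 1-star cohort is merged away and survivors rate 3.
one_star_outcomes = [3, 3, None, None]       # None = merged away
survivors = [r for r in one_star_outcomes if r is not None]
avg_excluding_merged = sum(survivors) / len(survivors)

# Counting merged funds as failures (1 star) gives a lower average:
avg_counting_merged = sum(r if r is not None else 1
                          for r in one_star_outcomes) / 4

print(avg_excluding_merged, avg_counting_merged)  # 3.0 vs. 2.0
```

Because mergers are far more common among low-rated funds, this exclusion flatters the 1- and 2-star cohorts far more than the 4- and 5-star cohorts.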