Friday, May 6, 2016

Grad School Rankings...What do they mean??

In light of the recent post regarding the ranking competitiveness of the UAE, I started turning over the idea of rankings in my head.  We all value rankings, whether we admit it or not.  For one, rankings help us make decisions.  I'm sure a number of us peeked at the US News and World Report ranking of biomedical research programs before selecting Emory as our home.  From personal experience, I found one of the prime emphases in grant writing class to be citing the number of F31 grants awarded to Emory GDBBS students (apparently we are currently the number 2 program in the US, and at one point last year we were in first place). Oddly enough, despite our high ranking in terms of NRSAs, we are ranked only number 30 among biological science graduate programs, according to the US News and World Report website. How could this be?  And what does statistics have to say about this?

How Emory Stacks Up...maybe.


Indeed, some rankings are based purely on objective, raw numbers, such as the NRSA statistic: the student either received F31 funding, or they did not. Others, such as the US News and World Report rankings and the UAE competitiveness rankings, are based on an amalgamation of factors, including some that are subjective.  I did a bit of digging to figure out what the US News and World Report numbers are based on. Perusing their website left me with more questions than answers.  Any information given on how the rankings were determined is murky at best.  The most detail I could find describes the methodology for their undergraduate ranking system (the original). As one article from their own website that investigated this topic correctly points out, "The host of intangibles that makes up the [college] experience can't be measured by a series of data points."  Factors such as reputation, selectivity, and student retention are cited as some of the data points that determine ranking.  As for how those factors are actually quantified, however, I did not find any clear answer.  The website cites a "Carnegie Classification" as the main method for determining rank. 
The Carnegie Classification was originally published in 1973, and subsequently updated in 1976, 1987, 1994, 2000, 2005, 2010, and 2015 to reflect changes among colleges and universities. This framework has been widely used in the study of higher education, both as a way to represent and control for institutional differences, and also in the design of research studies to ensure adequate representation of sampled institutions, students, or faculty.

As you can probably gather, the quantification involved in the Carnegie Classification is never really defined, and it left me wondering if there is some sort of conspiracy underlying these rankings.  Reading about these rankings left me feeling the way I imagine a nonscientist feels when trying to understand scientific data from pop culture articles.  Still, we continue to value these rankings, even if we have no idea what they really mean.  Yet again, we revisit the theme that there is a strong need not only for statistical literacy, but also for statistical transparency. Statistical analysis needs to be laid out clearly enough that the layperson can fully appreciate the true value of a ranking.

I also wonder how the quantitative and qualitative factors apparently used in the Carnegie Classification are combined to determine one final ranking value.  As we have learned in class, continuous and categorical variables simply do not mix.  Yet here, and most everywhere, we see them being combined. Maybe it is beyond my level of statistical comprehension, but I wonder if there is a way to correctly combine the two? A rough sketch of one common approach is below.
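For what it's worth, one common trick is to put everything on a comparable numeric scale before averaging: standardize the continuous measures as z-scores, map each categorical or ordinal measure onto an arbitrary numeric scale, and then take a weighted sum. Here is a minimal sketch in Python. The program data, factor names, "reputation" tier mapping, and weights are all hypothetical placeholders I made up for illustration; this is emphatically not the actual US News or Carnegie methodology, which is exactly the part they never spell out.

```python
# A minimal sketch of one common way to build a composite ranking score:
# z-score the continuous measures, map each categorical measure onto a
# 0-1 scale, then take a weighted average. All numbers and weights below
# are hypothetical illustrations, not any published methodology.
import statistics

# Hypothetical continuous measures for five made-up programs
f31_awards = [42, 35, 28, 19, 11]              # count of F31 grants
retention  = [0.95, 0.91, 0.88, 0.90, 0.82]    # fraction of students retained

# Hypothetical categorical measure: peer "reputation" survey tier
reputation_tier = ["high", "high", "medium", "medium", "low"]
tier_score = {"high": 1.0, "medium": 0.5, "low": 0.0}  # arbitrary ordinal mapping

def zscores(xs):
    """Standardize a list of numbers to mean 0, standard deviation 1."""
    mu, sd = statistics.mean(xs), statistics.stdev(xs)
    return [(x - mu) / sd for x in xs]

# The weights are a judgment call -- precisely the subjective step that
# makes these rankings hard to interpret without transparency.
weights = {"f31": 0.4, "retention": 0.3, "reputation": 0.3}

z_f31, z_ret = zscores(f31_awards), zscores(retention)
composite = [
    weights["f31"] * z_f31[i]
    + weights["retention"] * z_ret[i]
    + weights["reputation"] * tier_score[reputation_tier[i]]
    for i in range(len(f31_awards))
]

# Rank programs from highest composite score to lowest
order = sorted(range(len(composite)), key=lambda i: composite[i], reverse=True)
for rank, i in enumerate(order, start=1):
    print(f"Rank {rank}: program {i + 1}, score {composite[i]:.2f}")
```

Of course, the weights and the numeric mapping for "reputation" are themselves subjective choices, which is exactly why spelling them out matters: two perfectly reasonable sets of weights can produce two very different orderings of the same programs.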
