The Data Behind the College Free Speech Rankings

By Sean Stevens

Every year, high school students weigh the pros and cons of various colleges and choose which one to attend. An abundance of resources exists to help these students consider all the factors that may prove relevant to their decision. Yet most of these resources don’t provide students with information on a fundamental aspect of the college experience: whether a college promotes, merely facilitates, or actually hampers the free exchange of ideas.

The College Free Speech Rankings offer students such insight by providing a glimpse into the social environments of each college surveyed.

How were the rankings calculated? A total of 24 questions were presented to each student taking the survey. Each of these 24 questions is an observed variable, so each student’s responses yield 24 observed variables in the dataset.

To compute our rankings, we first performed an exploratory factor analysis (EFA) of all the observed variables, followed by a series of confirmatory factor analyses (CFA). The basic assumption of an EFA is that one or more underlying variables, referred to as factors, can explain the significant correlations between certain survey questions or observed variables. The number of factors identified by an EFA is typically smaller than the number of observed variables.

For instance, scholars have created a measure of an individual’s willingness to self-censor consisting of eight questions that ask about willingness to express one’s opinion in social settings. An EFA can reveal if all eight of these items “hang together” well enough (i.e., have strong enough correlations with each other) so that the entire eight-question survey can be considered a measure of a single underlying factor, in this case willingness to self-censor.
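The “hang together” check can be illustrated with a small simulation (the data and loading values below are invented for illustration; they are not the actual self-censorship scale data). Responses to eight items are generated from a single underlying factor, and then the eigenvalues of the inter-item correlation matrix are examined. Under the Kaiser criterion, one common heuristic, each eigenvalue greater than 1 suggests one underlying factor:

```python
import numpy as np

rng = np.random.default_rng(0)
n_students, n_items = 1000, 8

# One latent factor drives all eight items (plus item-specific noise),
# mimicking a scale such as willingness to self-censor.
factor = rng.normal(size=n_students)
loadings = np.full(n_items, 0.8)               # illustrative loading strength
noise = rng.normal(size=(n_students, n_items))
responses = factor[:, None] * loadings + 0.6 * noise

# Eigenvalues of the inter-item correlation matrix; under the Kaiser
# criterion, each eigenvalue > 1 counts as one underlying factor.
corr = np.corrcoef(responses, rowvar=False)
eigenvalues = np.linalg.eigvalsh(corr)[::-1]   # sorted high to low
n_factors = int(np.sum(eigenvalues > 1.0))

print(f"largest eigenvalue: {eigenvalues[0]:.2f}")
print(f"factors suggested by Kaiser criterion: {n_factors}")
```

Because one factor generated all eight items, a single dominant eigenvalue emerges, which is the pattern that lets analysts treat the scale as measuring one construct.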

If more than one factor is identified, however, then the survey questions may be measuring multiple factors. In an EFA, the analyst does not determine the number of underlying factors – the data do. While a CFA is similar conceptually, here the analyst predetermines the factor structure and then assesses how well that predetermined factor structure accounts for (or “fits”) the correlations within the data.
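The confirmatory side of this logic can be sketched as well. In the toy example below, six simulated items actually come from two correlated factors; a crude residual-based fit check (not a full maximum-likelihood CFA, and with composite mean scores standing in for proper latent-variable estimates) shows that the correct predetermined grouping reproduces the observed correlations far better than a deliberately wrong one. All numbers and groupings are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000

# Two correlated latent factors (r = 0.3); items 0-2 load on A, items 3-5 on B.
f_a = rng.normal(size=n)
f_b = 0.3 * f_a + np.sqrt(1 - 0.3**2) * rng.normal(size=n)
items = np.column_stack(
    [0.8 * f_a + 0.6 * rng.normal(size=n) for _ in range(3)]
    + [0.8 * f_b + 0.6 * rng.normal(size=n) for _ in range(3)]
)
observed = np.corrcoef(items, rowvar=False)

def fit_rmsr(grouping):
    """Crude fit index: root-mean-square residual between the observed
    correlations and those implied by a hypothesized factor grouping,
    using composite (mean) scores as stand-ins for the latent factors."""
    composites = np.column_stack([items[:, g].mean(axis=1) for g in grouping])
    phi = np.corrcoef(composites, rowvar=False)     # factor correlations
    member = {i: k for k, g in enumerate(grouping) for i in g}
    load = np.array([np.corrcoef(items[:, i], composites[:, member[i]])[0, 1]
                     for i in range(items.shape[1])])
    implied = np.outer(load, load) * np.array(
        [[phi[member[i], member[j]] for j in range(6)] for i in range(6)]
    )
    mask = ~np.eye(6, dtype=bool)                   # ignore the diagonal
    return np.sqrt(np.mean((observed - implied)[mask] ** 2))

rmsr_correct = fit_rmsr([[0, 1, 2], [3, 4, 5]])     # matches the true structure
rmsr_wrong = fit_rmsr([[0, 1, 3], [2, 4, 5]])       # scrambles the structure
print(f"RMSR, correct grouping: {rmsr_correct:.3f}")
print(f"RMSR, wrong grouping:   {rmsr_wrong:.3f}")
```

The grouping that matches how the data were actually generated leaves much smaller residual correlations, which is the sense in which a hypothesized factor structure “fits” the data better.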

Our EFA revealed two primary factors: Openness and Tolerance. The Openness factor consisted of questions about eight topics that students could identify as difficult to have an honest conversation about on campus. The Tolerance factor consisted of six questions asking about support for or opposition to controversial speakers on campus. Three additional factors were also identified: Administrative Support (two survey questions), Self-expression (one survey question), and the FIRE Spotlight ratings.

A CFA was then used to test how three different models fit the data:
●      A “two-factor” model with Openness and Tolerance as the factors;
●      A “four-factor” model that also included Administrative Support and Self-expression; and
●      A “five-factor” model that also included the FIRE Spotlight rating.

The goal of the CFA is to identify which of the three models best “fits” the data and offers the most explanatory power for student responses.

These analyses revealed that a solution that included all five factors was appropriate for two reasons: 1) The five-factor model fit the data just as well as the two-factor model; and 2) It explained more of the variance in student responses. Our weighting scheme for how each factor was incorporated into the index that determined the rankings was also based on these analyses.
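As a sketch of how a weighted index of this kind can be assembled, the snippet below standardizes hypothetical factor scores for four colleges and combines them into a single ranking score. The college names, scores, and weights are all made up for illustration; they are not FIRE’s actual values or weighting scheme:

```python
import numpy as np

# Hypothetical factor scores for four colleges (rows) on the five factors
# (columns): Openness, Tolerance, Administrative Support, Self-expression,
# and FIRE Spotlight rating. All values are invented.
colleges = ["College A", "College B", "College C", "College D"]
scores = np.array([
    [3.1, 2.8, 4.0, 3.5, 2.0],
    [4.2, 3.9, 3.1, 4.0, 3.0],
    [2.5, 3.0, 2.8, 2.9, 1.0],
    [3.8, 4.1, 3.6, 3.2, 3.0],
])

# Illustrative weights summing to 1 -- in practice these would be derived
# from the factor analyses (e.g., from the variance each factor explains).
weights = np.array([0.35, 0.30, 0.15, 0.10, 0.10])

# Standardize each factor across colleges, then take the weighted sum.
z = (scores - scores.mean(axis=0)) / scores.std(axis=0)
index = z @ weights

ranking = sorted(zip(colleges, index), key=lambda t: -t[1])
for rank, (name, val) in enumerate(ranking, start=1):
    print(f"{rank}. {name}: {val:+.2f}")
```

Standardizing first keeps any one factor’s raw scale from dominating the index; the weights then control how much each factor matters to the final ordering.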

Although it is extremely unlikely that any college would ever earn a perfect score, and a college can hardly be held responsible for the influences of culture beyond campus, there certainly is room for all colleges to improve their speech climates, including the ones scoring highly in our College Free Speech Rankings.

For instance, this spring, just as we were about to launch our survey, a free-speech controversy occurred at the University of Chicago when a conservative Hispanic female student, participating in an initiative sponsored by the Institute of Politics, said, “I vote because the coronavirus won’t destroy America, but socialism will.” In response to her comment, she faced an onslaught of criticism and even threats of violence. Similar incidents also occurred at Kansas State, Texas A&M, and UCLA.

We hope that these data can help provide professors and administrators with a pathway to improving the free speech culture on American college campuses. We also hope that the specific knowledge about individual campuses now available is useful to students looking to make an informed decision about what college to attend.

This article was originally published on RealClearEducation.

Sean Stevens, Ph.D., is a Senior Research Fellow for Polling and Analytics at the Foundation for Individual Rights in Education (FIRE). The College Free Speech Rankings are a joint project produced by FIRE, the research firm College Pulse, and RealClearEducation.