So Who Writes the Supreme Court Opinions?

First off, I’d like to thank the Charles Center and the Monroe Program for this amazing opportunity, as well as my advisor Professor Sasser and my family for the support and guidance they gave me throughout the process.

Since my last post, a lot has happened, so I’ll try to be as thorough as possible. After collecting the majority of my documents, I first had to clean them up. Most were collected as PDF files, and the program I was using to collect word frequencies only works with .txt files. I also needed to remove certain words, such as those contained in citations and quotes. The reasoning behind this removal of thousands of words is at the heart of my study. Ultimately, my process seeks to identify a linguistic fingerprint of sorts in each of my clerks’ and justices’ writing. This fingerprint is unique to each author and is identified by the rates at which they use common words, like “a” or “the”. Because this fingerprint is personal, any words not directly written by that author are considered irrelevant.

After cleaning the documents and converting them all to .txt files, I ran them through a program called VOCSOFT, written by Richard Forsyth, which counted the uses of each word and produced frequency rates per 100 words [1]. However, before I could use this data, I needed to identify which words I would use in building my fingerprint. For this I returned to Inference and Disputed Authorship, the Mosteller and Wallace study I described in my last post. They recommend in section 8 of their study that anyone wishing to conduct a similar project on a smaller scale consider using their initial list of 70 words as a basis, and that all verbs and pronouns be removed before narrowing the list any further. This done, I was left with 55 words.
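To make that step concrete, here is a rough Python sketch of the kind of per-100-words rate calculation involved; the marker words and file handling here are placeholders of my own for illustration, not VOCSOFT’s actual code.

```python
import re
from collections import Counter

# Placeholder marker words for illustration only; not the study's final list.
MARKER_WORDS = ["a", "the", "of", "upon", "also"]

def rates_per_hundred(path, markers=MARKER_WORDS):
    """Count each marker word in a cleaned .txt file and scale to uses per 100 words."""
    with open(path, encoding="utf-8") as f:
        tokens = re.findall(r"[a-z']+", f.read().lower())
    total = len(tokens)
    counts = Counter(tokens)
    return {word: 100 * counts[word] / total for word in markers}
```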

I then turned to the VOCSOFT data collected from the papers I knew for a fact had been written by the clerks and the justices themselves. This data would be used to weed out words that were unhelpful for one of two reasons: either they were used at such similar rates by all of the subjects that they could not reasonably distinguish between authors, or they were particular to a certain type of document and would therefore be unhelpful when it came time to analyze the opinions. I removed nine words for being too similar in frequency and three for appearing only in speeches and not in any academic papers. Below is a sample image of the VOCSOFT data, showing word rates per 100 words.
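The winnowing itself amounts to comparing the two authors’ baseline rates word by word. A sketch of that first criterion might look like the following, where the cutoff is purely illustrative rather than the threshold I actually used.

```python
def winnow(clerk_rates, justice_rates, min_gap=0.05):
    """Keep only words whose per-100-words rates differ enough between the two authors.

    min_gap is an illustrative threshold, not the cutoff used in the study.
    """
    return {
        word: (clerk_rates[word], justice_rates[word])
        for word in clerk_rates
        if abs(clerk_rates[word] - justice_rates[word]) >= min_gap
    }
```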

Having finally settled on a set of 42 distinguishing words from which to build my authorial fingerprints, it was time to turn my rates into initial probabilities that I could later plug into my final Bayesian formula. To do this I used the Poisson distribution (equation below), a long-standing formula often used to model events that occur at a steady rate (such as the number of hurricanes in a year). In this case, it is used to convert my word use rates into probabilities. These values are available in the final data set document.
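For reference, the standard Poisson formula gives the probability of seeing a word k times in a block of text when an author’s expected count for that word is λ:

\[ P(k;\lambda) = \frac{\lambda^{k} e^{-\lambda}}{k!} \]

Plugging an author’s baseline rate in for λ and the observed count in for k yields the probability values used in the next step.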

In order to make sure that my methods and data were sound before applying the final Bayesian equation to my Supreme Court opinions, I applied it to a few papers of known authorship, namely three of Victor Brudney’s and two of Wiley Rutledge’s. Luckily enough, they checked out perfectly, with each author’s papers showing over 60:1 odds, or a 98.36% chance of having been written by their true author, which is near certainty and a perfect verification. Those results are attached below, labeled Method Verification. It was then time to apply those methods to the Supreme Court opinions themselves, and the results were somewhat surprising.
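As a quick sanity check on that figure, odds of 60:1 convert to a probability in the usual way:

\[ P = \frac{60}{60 + 1} \approx 0.9836 \]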

 

Before those results, a quick note on Bayesian probability: Bayesian probability analysis is an atypical way of looking at probability. Instead of the usual predictive probability that uses the context of several variables to predict something about another variable (like knowing the number and colors of marbles in a bag to predict what color will be drawn), Bayesian probability collects data and makes a prediction about the next outcome based on that data (like drawing a marble six times and using the observed proportions of each color to predict the next draw) [2]. This is particularly helpful in situations where initial data may be unavailable, as is the case in my experiment.

The formula for a basic Bayesian analysis, shown below, reflects this data-based probability in its second half. The first half captures any prior bias from original data, but because I have no original data, that prior can be set to 1:1 odds. My experiment deals exclusively with the second half of the equation, which is called the likelihood ratio.
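Written out in its odds form, the equation I am using looks like this, where C and J stand for clerk and justice authorship and D for the observed word counts; the first fraction on the right-hand side is the prior (set to 1:1 here) and the second is the likelihood ratio:

\[ \frac{P(C \mid D)}{P(J \mid D)} = \frac{P(C)}{P(J)} \times \frac{P(D \mid C)}{P(D \mid J)} \]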

That likelihood ratio in my case is calculated by dividing the Poisson value for the clerk by the Poisson value for the justice. I was only able to perform that process for one clerk and one justice (Rutledge), because my second justice’s (Byrnes) documents did not reach me in time to be included in this experiment, but I do plan on using him in the next iteration. Ultimately, two of the four tested opinions were likely written by Rutledge, with 27:1 and 3:1 odds respectively, but curiously enough, the other two opinions may have been written predominantly by his clerk Victor Brudney, with 3.5:1 and 1.5:1 odds respectively in favor of his authorship. These results are attached below, labeled rb cases (note: all total odds are expressed as odds of clerk authorship).
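In code, that word-by-word process reduces to multiplying the per-word likelihood ratios together. The Python sketch below shows the idea; the rates and counts are made up for illustration and are not the study’s actual data.

```python
import math

def poisson_pmf(k, lam):
    """Probability of observing a word k times when the author's expected count is lam."""
    return (lam ** k) * math.exp(-lam) / math.factorial(k)

def clerk_vs_justice_odds(observed_counts, clerk_rates, justice_rates):
    """Multiply per-word likelihood ratios into overall odds of clerk authorship.

    observed_counts: word -> count observed in the opinion (per fixed-length block)
    clerk_rates / justice_rates: word -> that author's expected count over the same block
    """
    odds = 1.0  # 1:1 prior odds, as described above
    for word, k in observed_counts.items():
        odds *= poisson_pmf(k, clerk_rates[word]) / poisson_pmf(k, justice_rates[word])
    return odds

# Hypothetical numbers for two marker words, purely for illustration
print(clerk_vs_justice_odds(
    observed_counts={"upon": 3, "also": 1},
    clerk_rates={"upon": 2.5, "also": 0.8},
    justice_rates={"upon": 0.9, "also": 1.6},
))
```

A result greater than 1 favors clerk authorship, while a result less than 1 favors the justice.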

These results have interesting implications. First, the ratios are smaller for the Supreme Court opinions than they are for the papers of known authorship. The most likely reason for this lies in the source documents used for the initial averages. In the original Mosteller and Wallace study, the authors were able to draw word use averages from Federalist Papers of known authorship and build their authorial fingerprints from those, so their averages were less distorted by differences between document types. My initial documents were largely articles from legal journals, which covered many of the same topics as the opinions, but just as writing a non-fiction essay requires slightly different vocabulary than writing a narrative essay, there are some inherent differences between the vocabulary used in opinions and in scholarly articles. This affects how accurately the rates can predict authorship.

However, there is another fairly likely explanation with more relevant implications for the conclusion of this experiment. Whichever of the clerk or the justice wrote the first draft of an opinion, the other likely edited it, and in the editing process may have inserted some of their own vocabulary and muddied the waters. This seems likely and is nearly impossible to resolve using these methods.

These concerns aside, this study has some important results. First, it demonstrates that the method developed by Mosteller and Wallace translates to this area of study on a small scale: it can be used to make moderately accurate predictions about who the primary author of a Supreme Court opinion was. It also opens the door to further study. The justice I eventually examined, Wiley Rutledge, served in the 1940s. It is entirely possible, and a suspicion of mine, that as the number of clerks under a justice at any given time expanded from one or two to four, opinions were written less often by the justices themselves. At the next opportunity, I plan to study justices from the 70s and 90s to determine whether this is the case.

Works Cited

  1. Forsyth, Richard. VOCSOFT Stylometric Software. PDF. Prince Edward Island, Canada: University of Prince Edward Island, November 2015.
  2. Nerbonne, John. The Exact Analysis of Text. PDF. University of Groningen, July 2007.

Method Verification

rb cases
