Without exaggeration, this was the busiest summer of my life thus far. When I entered this project, I had no idea what to expect. To repeat my earlier point, the term “math research” sounds almost oxymoronic: what is there to research about numbers? What I didn’t realize at the time was just how fast the possibilities can blow up. What seems manageable with “only” a million cases to test by computer can jump well out of the range of feasibility with the addition of a single extra variable, so it isn’t possible to simply have a computer try all your theories. Most of my time was spent figuring out how to narrow the search space for our theories, bringing it down from 2^50 to a few hundred billion, to 2^30, to a few hundred million: from something solvable in decades to something solvable in days or hours.
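The kind of pruning this involves can be sketched with a toy example (the constraint below is illustrative, not the actual research problem). Checking every candidate costs 2^n tests, but abandoning a partial candidate the moment it violates a constraint skips entire subtrees of the search:

```python
# Illustrative sketch, not the project's actual code: count bit strings of
# length n containing no three consecutive 1s, two ways.
from itertools import product

def count_brute_force(n):
    # Generate and test all 2^n candidate strings in full.
    return sum(1 for bits in product((0, 1), repeat=n)
               if "111" not in "".join(map(str, bits)))

def count_pruned(n, prefix=()):
    # Abandon a partial string as soon as it is already invalid, so the
    # whole subtree of completions below it is never generated.
    if prefix[-3:] == (1, 1, 1):
        return 0
    if len(prefix) == n:
        return 1
    return count_pruned(n, prefix + (0,)) + count_pruned(n, prefix + (1,))

print(count_brute_force(20) == count_pruned(20))
```

Both functions agree, but the pruned version touches only the viable branches, which is the difference between a search that finishes and one that doesn't at larger n.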

I also learned a lot on the computer science side, from the variety of tools at my disposal to how to use them efficiently. For example, through experimentation I found that, in my use case, raw Python code ran over 40 times slower than C, but with code optimizations and just-in-time compilation I got it down to only 1.6 times slower. There are also tradeoffs to consider: Python is more lenient about data types, for instance, while C allows more manual control over memory, which becomes important once your data sets grow past 60 gigabytes.
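As a hedged sketch of the kind of optimization this refers to (the workload and numbers here are illustrative, not the project's actual benchmark), pushing a Python-level inner loop down into C-implemented builtins often recovers much of the gap, and JIT compilers such as Numba or PyPy can close it further:

```python
# Illustrative micro-benchmark: total the set bits across a list of ints.
import timeit

def popcount_naive(values):
    # Pure-Python inner loop: every shift and add is interpreted.
    total = 0
    for v in values:
        while v:
            total += v & 1
            v >>= 1
    return total

def popcount_optimized(values):
    # bin() and str.count run in C, so the per-bit work leaves the
    # interpreter; a JIT decorator (e.g. Numba's @njit on the naive
    # version) is another route to the same effect.
    return sum(bin(v).count("1") for v in values)

data = list(range(1 << 12))
assert popcount_naive(data) == popcount_optimized(data)
slow = timeit.timeit(lambda: popcount_naive(data), number=10)
fast = timeit.timeit(lambda: popcount_optimized(data), number=10)
print(f"naive: {slow:.4f}s, optimized: {fast:.4f}s")
```

The exact speedup depends on the workload, but the pattern of measuring first with `timeit`, then moving the hot loop out of interpreted Python, is the general technique.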

Lastly, I had the incredible opportunity to run my code on William and Mary’s High Performance Computing cluster. Its goal is to let students and faculty run experiments their personal machines can’t handle, and some of its clusters feature hundreds of gigabytes of RAM per node. Though my time on it was limited, it made me truly appreciate the tools we have access to at William and Mary.

Though we ultimately did not find the graphs we were looking for, it was an honor to add my experience to the corpus of mathematical work, and I will definitely be back next year to try more.