In the past few weeks, I have discovered that the experimentalist, unlike the theorist, their book-dwelling cousin, spends the majority of their time troubleshooting equipment. As discussed previously, NMR experiments are extremely demanding on experimenters and equipment alike – after all, we are manipulating atomic nuclei on a quantum level. Given the rigorous requirements of these experiments, it is natural that equipment will fail and technical problems will arise. Since my last post, the lab has faced a number of such problems, including a broken preamplifier, temperature controller, and probe. Each of these components plays a necessary role in NMR experiments: the preamplifier, for instance, is essential for data collection. As previously mentioned, NMR measures the relaxation of magnetization vectors as they precess back to their equilibrium state. These precessions take place on the quantum level; as such, the resulting signals are incredibly small, usually on the order of microvolts (10⁻⁶ volts). Therefore, a powerful preamplifier is needed to boost the signal before it is sent to the analog-to-digital converters (ADCs) and on to the computer for signal averaging. Despite the preamp's importance, its failure was only a minor inconvenience – there are several other preamps in the lab, which will be used until the broken amp is repaired or replaced.
The loss of the spectrometer’s temperature controller provided a similar hiccup in operations. During NMR experiments, it is often necessary to keep the sample at a constant temperature. The variable temperature (VT) controller is used to this end. This device feeds a stream of ultra-pure nitrogen gas into a thermally insulated glass dewar inside the probe. A small heating coil is mounted on the outside of the dewar, as is a temperature sensor. Using the data from the sensor, the VT controller adjusts the heater output, allowing for ultra-precise temperature control. Temperature regulation has not been necessary for most of my experiments, so the error was not discovered until last week, when the spectrometer was rebooted. When the VT controller was activated, two of its fuses blew out, rendering it useless. The burned-out fuses were replaced with fresh ones, but the new fuses fared no better. The symptoms seemed to indicate that the VT controller’s power supply was malfunctioning. Rather than trying to replace the power supply, another VT controller was ordered to replace the broken unit. The new unit arrived a few days ago and was promptly installed, solving yet another of the lab’s technical problems.
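The read-sensor, compare-to-setpoint, adjust-heater loop described above is a classic feedback control problem. The sketch below illustrates the idea with a simple proportional controller acting on a toy thermal model; the gains, time constants, and temperatures are all illustrative assumptions, not the real VT controller's parameters (commercial units typically use more sophisticated PID control).

```python
# Minimal sketch of the feedback loop a VT controller implements:
# read the sensor, compare to the setpoint, adjust the heater output.
# All numbers here (gain, thermal model) are illustrative only.

def run_vt_loop(setpoint_k, ambient_k, gain=0.5, steps=200):
    """Proportional control of a toy dewar thermal model (temps in kelvin)."""
    temp = ambient_k
    for _ in range(steps):
        error = setpoint_k - temp           # sensor reading vs. target
        heater = max(0.0, gain * error)     # heater can only add heat
        # toy model: heater warms the dewar; ambient cooling pulls it down
        temp += 0.1 * heater - 0.02 * (temp - ambient_k)
    return temp

final = run_vt_loop(setpoint_k=320.0, ambient_k=295.0)
```

One instructive detail: a purely proportional loop like this settles below the 320 K setpoint (at the temperature where heating and ambient cooling balance), which is why real controllers add an integral term to eliminate the steady-state offset.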
Not all technical issues, however, are quick fixes. The 2.5 millimeter probe, which is used to insert samples into the magnetic field, also seems to be malfunctioning. Recall that when solid samples are analyzed via NMR, they are usually spun at high frequencies (typically 30 kHz). This high-speed spinning averages out any orientation-dependent, anisotropic spin interactions (such as quadrupolar coupling), resulting in narrower line shapes. However, the fiber optic spin sensor on the 2.5 millimeter probe appears to be broken: the spin rate controller cannot measure the spin rate, making it impossible to spin samples stably. Without the ability to spin samples, any data collected is useless. As such, no scandium oxide experiments can proceed on the high-field equipment until the probe is repaired. We are currently in the process of sending the probe away to be fixed; however, it will probably be several weeks before it is repaired.
In spite of these setbacks, data collection is moving forward on the low-field magnet (7.0 Tesla). Currently, we are attempting to measure the T1Z value of scandium oxide using a “saturation-recovery sequence”. As discussed previously, NMR experiments measure the movement, or relaxation, of a sample’s magnetization vector from an excited state to its equilibrium state. This relaxation process is often modeled exponentially, as given by the equation M(t) = M0(1 − e^(−t/T1Z)), where M(t) is the magnetization at a given point in time, t, M0 is the equilibrium magnetization, and T1Z is the relaxation time (the time needed for the magnetization vector to recover to about 63% (1 − 1/e) of its equilibrium value). A multi-pulse sequence is used to measure T1Z. First, the sample is hit with a train of 90 degree pulses. These pulses generate an equal distribution of high-energy and low-energy spins, that is, they saturate the sample. The equal spin distribution means the sample has no net magnetization. The sample’s magnetization is then allowed to recover for a short time t. After this delay, a second pulse is used to measure the magnetization vector’s recovery (M(t)). This sequence is repeated for several different t values. Using the measured values and the equation above, the sample’s T1Z value can be determined using an exponential best fit program.
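The fitting step at the end of that procedure can be sketched in a few lines. The snippet below generates synthetic recovery data from the model M(t) = M0(1 − e^(−t/T1Z)) and recovers T1Z by least squares; the delays, noise level, and "true" values are invented for illustration, and a real analysis would use the delays and peak intensities from the spectrometer (typically with a standard nonlinear fitting routine rather than the simple grid search used here).

```python
import numpy as np

# Sketch of the "exponential best fit" step of a saturation-recovery
# measurement. The data below are synthetic; real delays and measured
# magnetizations come from the spectrometer.

def fit_t1z(delays, measured, t1z_grid):
    """Grid-search T1Z; for each candidate, M0 has a closed-form
    least-squares solution because the model is linear in M0."""
    best_t1z, best_m0, best_resid = None, None, np.inf
    for t1z in t1z_grid:
        f = 1.0 - np.exp(-delays / t1z)          # model shape for this T1Z
        m0 = (measured * f).sum() / (f * f).sum()
        resid = ((measured - m0 * f) ** 2).sum()
        if resid < best_resid:
            best_t1z, best_m0, best_resid = t1z, m0, resid
    return best_t1z, best_m0

true_m0, true_t1z = 100.0, 5.0                   # illustrative values (a.u., s)
delays = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0, 32.0])
rng = np.random.default_rng(0)
measured = true_m0 * (1 - np.exp(-delays / true_t1z)) \
           + rng.normal(0.0, 1.0, delays.size)   # small additive noise

t1z_fit, m0_fit = fit_t1z(delays, measured, np.arange(1.0, 10.0, 0.05))
```

Note that the delays must span both the early rise and the late plateau of the recovery curve: the short delays pin down T1Z, while the long ones pin down M0.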
While the theory behind this technique is sound, my results will be limited by the low resolution intrinsic to low magnetic field measurements. The signal-to-noise ratio and the peak resolution of an NMR spectrum both increase with the strength of the magnetic field used in the experiment. In many ways, NMR spectrometers are like radios – using the spectrometer, an experimenter “tunes in” to nuclei-specific frequencies and “listens” to how the nuclei respond to electromagnetic pulses. Like radio operators, NMR spectroscopists must also deal with background noise, which interferes with signal acquisition. Although it does not act as a detector, the magnet used in NMR is somewhat analogous to the antenna of a radio – a radio with a bigger antenna collects more signal with less interference from background noise. Similarly, if a bigger magnetic field is used in the experiments, the signal-to-noise ratio and the spectral resolution increase. Since I am using a much weaker magnet (7.0 Tesla versus 17.6 Tesla), the signal-to-noise ratio of the resulting spectrum will be much lower than it would be on the high-field gear, and my data will be more error-prone as a result. The increased influence of noise is proving to be extremely problematic – I have already run two saturation-recovery experiments, but the amount of noise present has rendered the spectra practically unusable. The problem is compounded by scandium oxide’s infuriatingly long T1Z value. An experiment’s signal-to-noise ratio increases with the square root of the number of scans performed (that is, an experiment with 1024 scans will have only twice the signal-to-noise of an experiment with 256 scans). In order to get a decent signal, each saturation-recovery sequence would require several thousand scans. However, scandium oxide’s T1Z value means that such lengthy experiments would take months to run, rendering them impractical at best.
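The square-root scaling of signal averaging is easy to verify numerically. The toy simulation below averages N synthetic "scans" of a fixed signal buried in independent Gaussian noise and compares the noise level after 1024 scans against 256 scans; the signal and noise amplitudes are arbitrary illustrative values, not real spectrometer numbers.

```python
import numpy as np

# Demonstration of the sqrt(N) rule: averaging N scans of a fixed
# signal plus independent noise reduces the noise by sqrt(N), so going
# from 256 to 1024 scans only doubles the signal-to-noise ratio.

def noise_after_averaging(n_scans, noise_sigma=5.0, n_points=4096, seed=0):
    """Average n_scans noisy traces and return the residual noise level."""
    rng = np.random.default_rng(seed)
    scans = 1.0 + rng.normal(0.0, noise_sigma, size=(n_scans, n_points))
    avg = scans.mean(axis=0)          # signal averaging across scans
    return avg.std()                  # constant signal cancels in std()

# 4x the scans, but only ~2x the improvement in noise (hence in SNR)
ratio = noise_after_averaging(256) / noise_after_averaging(1024)
```

This diminishing return is exactly why a long T1Z is so painful: quadrupling the measurement time buys only a factor of two in signal-to-noise.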
On a lighter, unrelated note, I have begun processing the results of the multi-pulse sequences performed earlier. More specifically, I have been comparing the relative efficiencies of three different triple-quantum multi-pulse sequences to determine which provides the most well-resolved scandium oxide spectrum with the best signal-to-noise ratio. Thus far, it seems that the double frequency sweep provides the strongest, clearest signal, making it the best choice for future experiments. Interestingly, all three of the multi-pulse sequences show a strange bump that was not present in our simulated spectrum. This unexpected spectral feature could be the result of chemical impurities, but it could also indicate a site in scandium oxide’s crystal structure with a lower T1Z value. Only further experiments (on the high-field gear, presumably) will tell.