Improving DAQ scripts, Fuchs Paper Updates
This post covers work from 03/25 - 04/05. I focused on improving the DAQ process at AFRL and also on getting the synthetic data ready for publication.
Revising the Waveplate looping program
In the last post, I showed that there was an issue with the waveplate looping. If I look at the correlation plots from last time, the main pulse is biased to be larger and the pre-pulse is biased to be smaller. This can be seen by taking the cos-squared of the waveplate angle and making a histogram or KDE plot of the resulting intensities (in this section, I am not looking at real data).
We can see that there is a bias in which energies are being explored. Nathaniel told me that the actual angle on the WP and the angle relative to the laser axis are not necessarily the same. This is why the MP and PP distributions appear flipped (there may be roughly a 90 degree offset from the laser axis), and it is something that can be calibrated on the day of the experiment. My proposed solution is to find where the energy is maximal (for the main pulse) and minimal (for the prepulse) and store those angles as reference angles. At these maximum (or minimum) angles we oversample the most, so we need to sample less densely there. I can demonstrate my solution with an example:
Say we have a total of $N = 100$ points between a max (or min) angle and the ending angle (which may be something like 15-30 degrees away from the max/min angle), collected during a time $T$. We'll call this angular range $\Delta \theta$. If we were moving at the normal linear rate, we'd have a step size of $\Delta \theta/N$. Instead, for the first $T/3$ seconds we should move with a step size of $5 \Delta \theta/(3N)$, for the second $T/3$ seconds with a step size of $\Delta \theta/N$, and for the last $T/3$ seconds with a step size of $\Delta \theta/(3N)$. In this way, the larger initial step size means we sample fewer points in the region we were oversampling. A histogram of intensities sampled this way looks like this:
which is much closer to a uniform distribution.
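Here is a minimal sketch of that variable-step scheme (not the actual DAQ script); the reference angle, the angular range, and the cos-squared intensity mapping are illustrative assumptions:

import numpy as np

N = 99                # total samples between the reference angle and the end angle
theta_ref = 0.0       # reference (max-energy) angle, degrees -- illustrative
dtheta = 20.0         # angular range to cover past the reference angle, degrees

# Three equal time blocks with step sizes 5/3, 1, and 1/3 of the uniform step dtheta / N,
# so the total distance covered still sums to dtheta
steps = np.concatenate([
    np.full(N // 3, 5 * dtheta / (3 * N)),
    np.full(N // 3, dtheta / N),
    np.full(N // 3, dtheta / (3 * N)),
])
angles = theta_ref + np.cumsum(steps)

# Relative pulse energy taken as cos^2 of the waveplate angle, as described above
intensity = np.cos(np.radians(angles)) ** 2

# The histogram of sampled intensities should be much flatter than with uniform stepping
counts, edges = np.histogram(intensity, bins=10)
print(counts)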
Fixing Mightex Shot Counts
I came up with a simple fix to revise the shot counts, shown in the following code:
import numpy as np  # x is assumed to be a NumPy integer array of raw shot counts

i = 1  # index into x
while i < len(x):
    if x[i] < x[i - 1]:   # shot counter rolled over past the 2**16 limit
        x[i:] += 2**16    # shift the rest of the sequence up by one full wrap
    i += 1
This will just detect any time we go from a higher to a lower shot count (i.e., when we overflow past the $2^{16}$ limit) and add that limit to the rest of the numbers in the sequence. This runs in a fraction of a second even with millions of shots.
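For reference, here is the same logic wrapped in a small helper (the function name is mine, and the shot counts below are made up) with a quick check on a toy sequence:

import numpy as np

def unwrap_counts(x):
    """Undo 16-bit rollovers in a monotonically increasing shot counter."""
    x = np.asarray(x, dtype=np.int64).copy()
    for i in range(1, len(x)):
        if x[i] < x[i - 1]:   # counter rolled over past 2**16
            x[i:] += 2**16
    return x

print(unwrap_counts([65530, 65534, 2, 7]))  # -> [65530 65534 65538 65543]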
Optimization Scheme
To try to optimize an electron signal, I am focusing on the following two quantities: the median pixel (or energy) and the number of electron counts (within the IQR). I created an objective function that depends on a specified cutoff energy $KE_c$ and a hyperparameter $\alpha$ (lower $\alpha$ favors more electrons over matching the cutoff energy). There is value in making the two terms of the optimization scheme approximately equal, so I'll define the objective function as:
\begin{equation} f(KE_c, \alpha) = 1 + \alpha \frac{\lvert KE - KE_c \rvert}{KE_c} + (1 - \alpha) \left(1 - \frac{\log_{10}(N_e)}{8}\right) \end{equation}
which makes $\alpha = 0.5$ an approximately middling value that weights the two terms roughly equally. The theoretical maximum for the number of electrons is 2.3908e8 if every single one of the 3648 pixels were saturated at $2^{16}$ counts, so the 8 can be thought of as roughly $\log_{10}$ of this upper bound. To come up with an optimal region, I first filter the data down to the 100 points with the lowest values of the desired objective function. Then I calculate the mean of these 100 points (excluding the objective function column) and keep the 25% of points that lie closest to this mean, to avoid outliers and only consider relevant points. I additionally fit the MP and PP values with a min-max scaler so that this Euclidean distance is meaningful.
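Here is a sketch of how this selection could look in code; the DataFrame columns ('MP', 'PP', 'KE_med', 'N_e') and the use of scikit-learn's MinMaxScaler are my assumptions, not the exact script:

import numpy as np
import pandas as pd
from sklearn.preprocessing import MinMaxScaler

def objective(ke_med, n_e, ke_cut, alpha=0.5):
    # f = 1 + alpha*|KE - KE_c|/KE_c + (1 - alpha)*(1 - log10(N_e)/8)
    return 1 + alpha * np.abs(ke_med - ke_cut) / ke_cut \
             + (1 - alpha) * (1 - np.log10(n_e) / 8)

def pick_region(df: pd.DataFrame, ke_cut: float, alpha: float = 0.5) -> pd.DataFrame:
    df = df.copy()
    df["f"] = objective(df["KE_med"], df["N_e"], ke_cut, alpha)

    best = df.nsmallest(100, "f")               # 100 lowest objective values

    # Min-max scale MP and PP so that the Euclidean distance makes sense
    scaled = MinMaxScaler().fit_transform(best[["MP", "PP"]])
    center = scaled.mean(axis=0)                # mean of the 100 best points
    dist = np.linalg.norm(scaled - center, axis=1)

    # Keep the 25% of points closest to that mean to drop outliers
    return best[dist <= np.quantile(dist, 0.25)]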
Other things
Also, I looked through the Matlab code and created a Pspec Calibration file (energy to pixel mapping). I plot-digitized the proton spectra and converted that to a mapping between peak intensity, focal distance, and max proton energy. In looking through all of this, I realized that the data consists of 700 shots going from focal distance $-36.4\ \mu$m to $33.5\ \mu$m in $0.1\ \mu$m increments. Both plots filter out pixels with energy below $0.5$ MeV, and the Pspec camera has higher energy values at smaller pixel numbers.
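As a sketch of how the calibration gets used (the file name and a two-column pixel/energy layout are assumptions on my part):

import numpy as np

# Assumed format: two columns, pixel index and energy in MeV
pix, energy_mev = np.loadtxt("pspec_calibration.csv", delimiter=",", unpack=True)

# Energy decreases as pixel number increases on the Pspec camera, so sort by pixel
order = np.argsort(pix)
pix, energy_mev = pix[order], energy_mev[order]

def pixel_to_energy(p):
    """Interpolate the pixel-to-energy mapping from the calibration table."""
    return np.interp(p, pix, energy_mev)

# Keep only pixels above the 0.5 MeV threshold used in the plots
mask = energy_mev > 0.5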
To Do
- Work on Optimization Script to implement at Wright Patt
- Work on submitting the synthetic data paper.