Re-doing Fuchs Paper, Working on INNs, Asymmetry Modification
Fuchs Paper Redux
After the rejection from HPLSE, I decided to rework the paper, improving on a few things. First, I properly split the two datasets into training and testing sets, varying the number of training points from 200 to 5000.
Within the training set, I performed 5-fold cross-validation with randomized and grid searches to find the optimal hyperparameters for all the regression algorithms (SVR, GPR, Polynomial Ridge Regression). For the neural network, I set aside 20 percent of the training data as a validation set and stopped training when the validation loss stopped decreasing.
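For reference, a minimal sketch of the search setup, assuming scikit-learn; the parameter grid and the `MultiOutputRegressor` wrapper are illustrative, not the exact configuration used:

```python
# Hypothetical sketch of the 5-fold CV hyperparameter search (scikit-learn).
# The grid values below are illustrative placeholders.
from sklearn.model_selection import GridSearchCV
from sklearn.multioutput import MultiOutputRegressor
from sklearn.svm import SVR

param_grid = {
    "estimator__C": [0.1, 1, 10, 100],
    "estimator__gamma": ["scale", 0.01, 0.1],
}
search = GridSearchCV(
    MultiOutputRegressor(SVR()),       # SVR is single-output, so wrap it
    param_grid,
    cv=5,                              # 5-fold CV within the training set
    scoring="neg_mean_squared_error",
)
# search.fit(X_train, y_train)
# search.best_params_ holds the winning hyperparameters;
# RandomizedSearchCV works the same way with a sampled parameter space.
```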
All algorithms were able to run on CPU, and I also ran them on GPU to compare run-time performance. The exception was the polynomial regression, which runs almost instantaneously on CPU and which I did not run on GPU because RAPIDS does not support multi-output polynomial regression.
I included .pkl files of the grid-search results.
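As a minimal sketch of how those files can be produced and reloaded (assuming joblib; the file name is illustrative):

```python
# Persist and reload a fitted grid-search object (file name is a placeholder).
import joblib

joblib.dump(search, "svr_grid_search.pkl")   # save the grid-search results
search = joblib.load("svr_grid_search.pkl")  # reload later for analysis
```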
However, after I got back from Taiwan, Chris said that we should also modify the Fuchs model so that it is asymmetric with respect to focal distance, to more closely reflect the experiment.
Fuchs V4.1 Modification
To account for the peaks in the max proton energy not being symmetric with respect to focal distance, Chris sent us a paper by Decker which described some physics of short-pulse lasers traveling in underdense plasmas. Jack’s model already accounts for an exponential pre-plasma created by a pre-pulse heating the target and causing it to expand. More details can be found here.
We start with the expanded target, which has a maximum number density $n_\text{max}$ and a profile that decays exponentially with distance $x$ over a scale length $c_s t_0$. Here, $c_s$ is the speed of sound at the pre-plasma temperature and $t_0$ is the delay time between the pre-pulse and the main pulse, during which the target has time to expand. We can write this as
\begin{equation} n(x) = n_\text{max} \exp(-x/(c_s t_0)) \end{equation}
We assume the target is initially overdense, but as it expands, its decaying tail develops a point at a position $x_0$ where the density transitions from overdense to underdense at the critical density $n_\text{crit}$, defined by $n_\text{crit} = n_\text{max} \exp(-x_0/(c_s t_0))$, which leads to
\begin{equation} x_0 = c_s t_0 ( \ln(n_\text{max}) - \ln(n_\text{crit}) ) \end{equation}
and Jack had found the tail of the expanded target to end at $x_f = 5.517 c_s t_0$. The etching velocity $v_\text{etch}$ is defined in the Decker paper by
\begin{equation} v_\text{etch} = (\omega_p/\omega)^2 c \end{equation}
where $\omega_p = \sqrt{\frac{n e^2}{m_e \epsilon_0}}$ is the usual plasma electron frequency which is dependent on $x$ because the density $n$ is a function of $x$. To get an estimate of the “etching distance” we can integrate the “etching velocity” with respect to time.
\begin{equation} L_\text{etch} = \int v_\text{etch} dt = \int \frac{n_\text{max} \exp(-x / (c_s t_0)) e^2 c}{m_e \epsilon_0 \omega^2} dt \end{equation}
To proceed, we can do a few different things:
- Set $L = c \tau_\text{FWHM}$
- Change variables by using something like $dx = c_s dt$ so that we can integrate in terms of $x$ from $x_0 \rightarrow x_f$.
- Simply compute the average density of $n(x)$ in the interval $x_0 \rightarrow x_f$ and use this single value for all calculations, so that we don't have to integrate $v_\text{etch}$ over time (see the sketch after this list).
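As a minimal sketch of the third option, assuming we integrate over the pulse duration $\tau_\text{FWHM}$ (the function and argument names are illustrative, and the average density comes from integrating the exponential profile analytically):

```python
# Sketch: estimate L_etch from the mean density on [x0, xf] (option 3 above).
import numpy as np

e = 1.602176634e-19      # electron charge [C]
m_e = 9.1093837015e-31   # electron mass [kg]
eps0 = 8.8541878128e-12  # vacuum permittivity [F/m]
c = 2.99792458e8         # speed of light [m/s]

def etch_length(n_max, cs_t0, omega, tau):
    """n_max [m^-3], cs_t0 = scale length c_s*t_0 [m],
    omega = laser angular frequency [rad/s], tau = pulse duration [s]."""
    n_crit = eps0 * m_e * omega**2 / e**2   # critical density
    x0 = cs_t0 * np.log(n_max / n_crit)     # overdense -> underdense point
    xf = 5.517 * cs_t0                      # end of the expanded tail
    # Average of n_max * exp(-x/cs_t0) over [x0, xf], done analytically
    n_avg = n_max * cs_t0 * (np.exp(-x0 / cs_t0) - np.exp(-xf / cs_t0)) / (xf - x0)
    v_etch = (n_avg / n_crit) * c           # (omega_p/omega)^2 c = (n/n_crit) c
    return v_etch * tau
```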
I am not sure exactly how to proceed. The paper even mentions that local pump depletion occurs only for $I \lambda^2 > 10^{19}$ W/cm$^2$ $\mu$m$^2$, and we are not in this regime, which could be problematic. Another weird thing we do is use $n_\text{crit}/10$ instead of just $n_\text{crit}$.
Invertible Neural Networks
I tried to work more on using invertible neural networks (INNs). The issue is that most of the examples are convolutional neural networks for analyzing images. The mathematics behind INNs seems confusing: they use additive and affine coupling blocks to encode the invertibility. As far as I know, no video tutorials exist anywhere, and the online documentation is very bare-bones. It seems too difficult to implement an invertible neural network at this stage, and I don't have much motivation to keep exploring it, because the only resources I can find are dense computer-science research papers devoid of simple examples similar to what we are doing.
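To record the core idea for later, here is a minimal sketch of an additive coupling block, assuming PyTorch (the hidden layer size is arbitrary). The forward pass shifts one half of the input by a function of the other half, so it can be inverted exactly by subtracting the same shift, with no need to invert the inner network:

```python
# Minimal additive coupling block: invertible by construction.
import torch
import torch.nn as nn

class AdditiveCoupling(nn.Module):
    def __init__(self, dim):
        super().__init__()
        half = dim // 2
        # Inner network t(.) can be arbitrary; it is never inverted.
        self.t = nn.Sequential(nn.Linear(half, 64), nn.ReLU(), nn.Linear(64, half))

    def forward(self, x):
        x1, x2 = x.chunk(2, dim=-1)
        return torch.cat([x1, x2 + self.t(x1)], dim=-1)  # shift x2 by t(x1)

    def inverse(self, y):
        y1, y2 = y.chunk(2, dim=-1)
        return torch.cat([y1, y2 - self.t(y1)], dim=-1)  # subtract the same shift

# Stacking blocks (with the halves swapped between blocks) gives a full INN.
```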
Things to Do
- Finish compiling results and update the Overleaf on GitHub
- Continue working on etching velocity calculation.