Preplasma Sims, WP experiment, redoing Fuchs paper

I ran some simulations adding a preplasma to the gold and aluminum targets, worked on creating a research project for Ethan, and did some work on using scipy.optimize with the Fuchs v3 model.

Adding preplasma to Titan Sims

Last time I introduced the Titan single- and double-pulse simulations with aluminum and gold targets that Nathaniel had been working on. I modified the density profile on the front side of the target to decay exponentially with a given scale length. The back side still contains the same half-micron-thick layer of protons to enhance the TNSA profile.

Below is a plot of the initial electron number density profile of the target when there is no preplasma:

[Figure: electron number density profile, no preplasma]

The total number of electrons is $15 \mu m \times 36 \mu m \times 1 m \times 6.02 \times 10^{28} (1/m^3) = 3.25 \times 10^{19}$ (the 1 m factor accounts for the ignored third dimension of the 2D simulation). This number should be conserved for the target with the added preplasma. Below is a plot of the target with a 1 micron scale length preplasma, computed from the equations of the last post:

[Figure: electron number density profile, 1 micron scale length preplasma]

The density in the main section of the target is now slightly lower to account for the electrons that have expanded into the preplasma. I verified numerically in EPOCH that the total number of electrons stays the same.
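
The exact profile equations are in the last post; the sketch below only illustrates the conservation condition, assuming a truncated exponential ramp on the front surface with placeholder values for the scale length and preplasma extent:

```python
import numpy as np

# Sketch of electron-number conservation, assuming the front surface is
# replaced by an exponential ramp of scale length L extending a distance
# x_pre in front of the original target surface. All values are placeholders.
n0 = 6.02e28   # solid electron number density (1/m^3)
d = 15e-6      # target thickness (m)
w = 36e-6      # transverse extent of the target (m)
L = 1e-6       # preplasma scale length (m)
x_pre = 10e-6  # assumed extent of the preplasma region (m)

# Conserve areal electron density:
#   n_bulk * d + n_bulk * L * (1 - exp(-x_pre / L)) = n0 * d
n_bulk = n0 * d / (d + L * (1.0 - np.exp(-x_pre / L)))

# Build the profile and check the total numerically, analogous to the EPOCH check.
x = np.linspace(-x_pre, d, 4001)
n_e = np.where(x < 0.0, n_bulk * np.exp(x / L), n_bulk)

# Trapezoid rule, times the transverse width: electrons per meter of the 3rd dim.
total_with_pre = np.sum(0.5 * (n_e[1:] + n_e[:-1]) * np.diff(x)) * w
total_no_pre = n0 * d * w               # = 3.25e19, the number quoted above
print(total_with_pre / total_no_pre)    # ~1, up to integration error
```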

My simulation results showed that the preplasma reduces the effectiveness of the double pulse effect. Here is the proton spectrum without preplasma:

[Figure: proton energy spectra, single vs. double pulse, no preplasma]

which shows a significant increase in energy for the double pulse. And here is the proton spectrum with the 1 micron preplasma:

[Figure: proton energy spectra, single vs. double pulse, 1 micron preplasma]

Here the preplasma enhances the proton energy spectrum from the single pulse, so the double pulse offers less of an improvement. This is to be expected, and it is why the experimentalists want to minimize the amount of pre-pulse delivered to the target.

Jack’s Optimization Notebooks

Jack worked on optimizing the Fuchs function using the minimize function from scipy.optimize. From his results, the following seems to hold (a rough sketch of this kind of comparison follows the list):

  1. The optimizer is comparable in performance to simply looking at the best point in the dataset.
    • It can interpolate between values not found on the grid.
    • Timewise, it takes longer than evaluating 1 million data points with the polynomial model (4.8 s versus 1.5 s).
  2. The polynomial model slightly underpredicts the actual proton count.
  3. The actual optimum occurs at a thickness of 1.15 microns instead of 0.5 microns. This suggests the polynomial model may be too simple: it always defaults to the minimum thickness.
  4. Optimization would be more beneficial when there are large gaps in the input parameter space (more input parameters) and with a more complex model (like a neural network).
  5. All models find a negative focal offset, which is good: the optimizer isn't getting stuck in a local optimum with a positive focal offset.
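
I don't have Jack's notebook in front of me, so the following is only a minimal sketch of the comparison in point 1; `poly_model` is a made-up stand-in for his fitted polynomial surrogate, and the parameter ranges are placeholders:

```python
import numpy as np
from scipy.optimize import minimize

def poly_model(thickness, focal):
    """Toy stand-in for the fitted polynomial surrogate: predicted proton
    count as a function of target thickness (um) and focal offset (um)."""
    return 1e9 * np.exp(-((thickness - 1.15) / 1.5) ** 2
                        - ((focal + 5.0) / 10.0) ** 2)

rng = np.random.default_rng(0)

# Baseline: evaluate the surrogate on ~1 million dataset points, keep the best.
thick = rng.uniform(0.5, 5.0, 1_000_000)
focal = rng.uniform(-30.0, 30.0, 1_000_000)
counts = poly_model(thick, focal)
i_best = np.argmax(counts)
print("best grid point:", thick[i_best], focal[i_best], counts[i_best])

# Optimizer: minimize the negative proton count within the same bounds.
# Unlike the grid scan, it can land between grid points.
res = minimize(lambda x: -poly_model(x[0], x[1]), x0=[2.0, 0.0],
               bounds=[(0.5, 5.0), (-30.0, 30.0)], method="L-BFGS-B")
print("optimizer:", res.x, -res.fun)
```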

Optimization of Models for Paper 1

I started to work on the optimization for paper 1. The problem we are trying to solve is this: given a desired maximum proton energy (cutoff) $E_c$, how can we produce as many protons as possible near this cutoff (say, within 15 percent of it)? On the one hand, we want the maximum proton energy $E_\text{MAX}$ to be as close as possible to $E_c$. On the other hand, we want a large number of protons $N_p$. In the Fuchs model, $N_p = E_\text{TOT}/E_\text{AVG}$, so having more total energy increases the number of protons. We can express this trade-off in the following way:

\begin{equation} f(E_\text{MAX}, E_\text{TOT}, E_\text{AVG}) = \frac{ \lvert E_c - E_\text{MAX} \rvert}{E_\text{TOT}} + \frac{K}{N_p} \end{equation}

where $K$ is a constant that controls the weight we put on the number of protons produced. The first thing I did was to optimize the learned models over a range of values of $K$ to see how the optimal $E_\text{MAX}$ and $N_p$ change with $K$. This is what I got:

[Figure: optimal $E_\text{MAX}$ and $N_p$ as a function of $K$ for the learned models]

I haven't yet done GPR and NN. I found that scipy.optimize only looks for local minima, so feeding it a single starting point may not be reliable; as a result, I decided to feed it around 10 initial points. Overall, I found that the optimizers don't help much over simply evaluating the model on the full dataset of 20,000 points and sorting for the best point. Chris and I therefore talked about instead using a more limited number of points, say around 2,000-4,000, so that there is actually some value in using optimizers. Additionally, we could focus on one particular value of $K$ for which the optimum is neither at 1 MeV nor at the max energy in the dataset, but somewhere in between; this is the most interesting regime to optimize over. A sketch of the objective and the multi-start setup is below.
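
Here is a minimal sketch of the objective above with a multi-start wrapper around scipy.optimize.minimize. The cutoff, bounds, $K$ values, and `toy_model` are all placeholders; the real version would plug in the learned models:

```python
import numpy as np
from scipy.optimize import minimize

E_CUTOFF = 20.0  # desired cutoff energy E_c in MeV (placeholder)

def objective(x, K, model):
    """f = |E_c - E_MAX| / E_TOT + K / N_p, with N_p = E_TOT / E_AVG.
    `model` maps x = (thickness, focal_offset, intensity) to
    (E_max, E_tot, E_avg)."""
    e_max, e_tot, e_avg = model(x)
    n_p = e_tot / e_avg
    return abs(E_CUTOFF - e_max) / e_tot + K / n_p

def multistart_minimize(K, model, bounds, n_starts=10, seed=0):
    """minimize only finds local minima, so restart it from several random
    points inside the bounds and keep the best result."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    best = None
    for x0 in rng.uniform(lo, hi, size=(n_starts, len(bounds))):
        res = minimize(objective, x0, args=(K, model),
                       bounds=bounds, method="L-BFGS-B")
        if best is None or res.fun < best.fun:
            best = res
    return best

def toy_model(x):
    """Toy stand-in for a learned model; returns (E_max, E_tot, E_avg)."""
    thickness, focal, intensity = x
    e_max = 30.0 * intensity * np.exp(-thickness / 3.0 - (focal / 20.0) ** 2)
    return e_max, 2e3 * e_max, 0.3 * e_max

bounds = [(0.5, 5.0), (-30.0, 30.0), (0.1, 1.0)]
for K in (0.01, 0.1, 1.0):
    res = multistart_minimize(K, toy_model, bounds)
    print(f"K={K}: x={res.x}, f={res.fun:.4g}")
```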

We also talked about creating one of those banana contour plots to see whether the optimized parameter space looks the same for all the models. One thing that might make sense is to fix the intensity and then vary the focal distance and thickness so that we can make a 2D contour plot; a sketch of this is below.
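
A quick sketch of that fixed-intensity slice, reusing `objective` and `toy_model` from the sketch above (the grid ranges, fixed intensity, and $K$ are made up); repeating this for each learned model would show whether the shape of the optimum agrees:

```python
import numpy as np
import matplotlib.pyplot as plt

# Fix the intensity and scan thickness vs. focal offset, contouring the objective.
I0 = 0.8   # fixed normalized intensity (placeholder)
K = 0.1    # weighting constant (placeholder)
thick = np.linspace(0.5, 5.0, 200)
focal = np.linspace(-30.0, 30.0, 200)

f_vals = np.array([[objective((t, fo, I0), K, toy_model) for t in thick]
                   for fo in focal])

plt.contourf(thick, focal, f_vals, levels=30)
plt.colorbar(label="objective $f$")
plt.xlabel("thickness (um)")
plt.ylabel("focal offset (um)")
plt.title("Fixed-intensity slice of the objective")
plt.show()
```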

One thing to note is that the benchmark we should be using is the noiseless Fuchs function, because that is what the models are trying to approximate. If we look at the noisy training data, we may find points that do better only because the noise artificially lowers the maximum energy and increases the total energy in a way that lowers the objective function.

So, in conclusion, I should work with fewer data points in a regime of $K$ where the optimal $E_\text{MAX}$ is within 15 percent of the cutoff. To achieve this, I should modify the objective function to add a large penalty term when the energy difference is greater than the threshold (say 10 percent), as sketched below.
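
As a sketch, the penalty could be bolted onto the objective from above (the threshold and penalty size are placeholders):

```python
def penalized_objective(x, K, model, tol=0.10, penalty=1e3):
    """Objective from the sketch above, plus a large constant penalty whenever
    E_MAX falls more than `tol` (fractionally) away from the cutoff E_CUTOFF."""
    e_max, e_tot, e_avg = model(x)
    n_p = e_tot / e_avg
    f = abs(E_CUTOFF - e_max) / e_tot + K / n_p
    if abs(E_CUTOFF - e_max) / E_CUTOFF > tol:
        f += penalty
    return f
```

A step penalty like this is discontinuous, so a smooth penalty (e.g., one that grows quadratically outside the band) might behave better with a gradient-based optimizer like L-BFGS-B.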

Coming Up: Analytical Double Pulse Model

Chris mentioned that I should look at Kruer Section 3.3 for inspiration on the interaction of a pulse at a non-normal angle of incidence with a plasma that has a density gradient (to get a better understanding of the preplasma interaction).

To Do

  • Read FrEIA paper that Joseph sent me
  • Work on edits to paper 1
  • Work on optimization notebooks
  • Get ready for NIF/JLF Meeting: Maybe print out a new poster
  • Read through Kruer Ch. 3
Written on January 12, 2024