Scaling with cell size for 10eV sims, New Arber-like formula, Pytorch, Explanation of Hockney Heating Papers

Peculiarity of 10eV Sim (Cont)

Last week I tried to understand why exactly the 10eV simulations did not seem to show increased heating as we increased the cell size. I decided to run some sims at 100eV for larger grid spacings: ${12, 15, 18, 25}$ nm.

image

From these results, it is clear that somewhere around 10nm, this relationship breaks down. For the 10eV sims from last week, it broke down around 3nm. If I were to guess, this relationship would break down for 1000eV around $\sqrt{10} \times 10 \approx 31$ nm and for 10000eV around $100$ nm.

Here is a plot for 1000eV:

image

In this plot, the slope appears to peak and stay constant around 25 nm. This tells me that my hunch might be correct and that there is some limit to the scaling of the heating with $\Delta x$. I predicted that this limit would be proportional to $\sqrt{T_0}$ because the Debye Length is proportional to $\sqrt{T_0}$. So, perhaps, when the ratio of the Debye Length to the Grid Spacing drops below a certain value, we no longer get the $\Delta x^2$ scaling. According to my results, this would suggest that $\lambda_D / \Delta x \sim 0.03$ is the limit. Hockney, in one of his papers (explained later), even says that none of his models 'may be considered for' $\lambda_D / \Delta x < 0.07$. He wrote this paper in 1971, so it is fair to assume that progress in developing PIC simulations has allowed us to under-resolve the Debye Length even further.
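As a rough sanity check on that threshold, here is a minimal sketch (not output from the sims themselves) that computes $\lambda_D / \Delta x$ at the breakdown spacings quoted above. The density is an assumed placeholder of about $10^{23}$ cm$^{-3}$ and should be swapped for whatever the simulations actually used; with that assumed value, the ratios come out in the 0.02-0.03 range.

```python
# Sketch: lambda_D / dx at the grid spacings where the dx^2 scaling seems to break down.
# The density below is an assumed placeholder, not necessarily the value used in the sims.
import numpy as np
from scipy.constants import epsilon_0, e

def debye_length_nm(T_eV, n_e_cm3):
    """Electron Debye length in nm for temperature in eV and density in cm^-3."""
    n_e_m3 = n_e_cm3 * 1e6
    return np.sqrt(epsilon_0 * T_eV * e / (n_e_m3 * e**2)) * 1e9

n_e = 1e23  # cm^-3 (assumed)
for T0_eV, dx_break_nm in [(10, 3), (100, 10), (1000, 31)]:
    ratio = debye_length_nm(T0_eV, n_e) / dx_break_nm
    print(f"T0 = {T0_eV:>5} eV: lambda_D / dx at breakdown ~ {ratio:.3f}")
```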

Creating a New Arber-Like Model That Takes the Initial Temperature into Account

Last week I created a plot that showed how well the Arber formula predicted the heating seen in the PIC simulations. From this plot, it seemed like the Arber formula did not do a very good job, at least for the simulations I have run so far. One thing to note is that the temperature dependence seems very off: if we focus on the points where the grid sizes are the same, it is clear that the heating does depend on temperature. As a result, I tried to create a model that accounts for this temperature dependence.

The Arber Paper says the heating time is

\begin{equation} \tau_H \sim \frac{n_{ppc}}{\omega_{pe}} ( \lambda_D / \Delta x)^2 \end{equation}

which is something I derive below in the section on the heating time (Hockney's papers). This dependence on the Debye Length and Cell Size came from an empirical fit. Dimensionally, we need some ratio of the Debye Length to the Cell Size, but there is no particular reason it has to be to the second power (that is just what fit their simulations best). Assume, instead, that we have the following model

\begin{equation} \tau_H \sim \frac{n_{ppc}}{\omega_{pe}} ( \lambda_D / \Delta x) \end{equation}

If we now solve for $\frac{dT}{dt} = T_0 / \tau_H$, the dependence on $T_0$ no longer cancels out: using $\lambda_D \propto \sqrt{T_0 / n_e}$ and $\omega_{pe} \propto \sqrt{n_e}$, we get

\begin{equation} \frac{dT}{dt} \propto \frac{n_{23} \Delta x \sqrt{T_0}}{n_{ppc}} \end{equation}

Using the same parameters as in last week's plot, I turned this modified scaling into a concrete model:

\begin{equation} \frac{dT_{eV}}{dt_{ps}} = \alpha_H \frac{ n_{23} \Delta x_{nm} \sqrt{T_0} }{n_{ppc}} \end{equation}
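As a minimal sketch, this model is easy to evaluate directly; note that $\alpha_H$ is just the fit constant, and the default value below is a placeholder rather than the fitted one.

```python
# Sketch of the modified (linear in lambda_D/dx) Arber-like model above.
# alpha_H is a fit constant; 1.0 is only a placeholder value.
def heating_rate_eV_per_ps(n_23, dx_nm, T0_eV, n_ppc, alpha_H=1.0):
    """dT/dt in eV/ps: alpha_H * n_23 * dx_nm * sqrt(T0_eV) / n_ppc."""
    return alpha_H * n_23 * dx_nm * T0_eV**0.5 / n_ppc

# example: density of 1 (in units of 1e23 cm^-3), 5 nm cells, 100 eV, 100 particles/cell
print(heating_rate_eV_per_ps(n_23=1.0, dx_nm=5.0, T0_eV=100.0, n_ppc=100))
```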

I made a similar plot, adding in the model prediction:

image

From this, we can see that my model aligns much more closely with the results of the PIC simulation. Additionally, I made another one of these plots, but this time I normalized the values relative to the PIC simulation (rather than the Arber formula):

image

I drew gray lines at the 1 and -1 points on the y-axis to indicate the region one order of magnitude above and below the PIC sim results. We can clearly see that the $\color{red}{\text{Arber formula}}$ often falls below the results of the $\color{blue}{\text{PIC sim}}$, whereas $\color{green}{\text{my model}}$ is much closer to the PIC sim results. My model is not perfect by any means, but it no longer shows the strong temperature dependence that the Arber formula has.

Some of the worst predictions of my model come from the simulations with 10eV. I believe this is due to the issue I was talking about earlier where the scaling with $\Delta x$ for 10eV breaks down around 3 nm.

Scaling with Initial Temperature

Since I now think that a $\sqrt{T}$ scaling is more appropriate, I decided to make one of those log-log plots that I have done before, but with respect to the initial temperature in electron volts.

image

Here, we see that for all but $\Delta x = 1$ nm, we get a slope that is somewhat close to $\frac{1}{2}$. When $\Delta x = 1$ nm and $T_0 = 10000$ eV, the heating-time graph looks very noisy, so this may be an outlier.
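For reference, this is roughly how I would pull the slope off one of these log-log plots (a sketch, not the exact script used for the figures); the same helper works for the grid-spacing scan in the next section.

```python
import numpy as np

def loglog_slope(x, y):
    """Least-squares slope of log10(y) vs log10(x), i.e. the fitted power-law exponent."""
    slope, _intercept = np.polyfit(np.log10(x), np.log10(y), 1)
    return slope

# e.g. loglog_slope(T0_values_eV, heating_rates) should come out near 0.5 if the
# sqrt(T0) scaling holds; loglog_slope(dx_values_nm, heating_rates) checks the dx scan.
```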

Scaling with Cell Size

I did another one of these log-log plots, this time with respect to the grid spacing; here are the results:

image

Here it becomes clearer that the squared scaling definitely does not hold for the lower temperatures (where the Debye Length is more under-resolved).

Download of PyTorch

Currently, PyTorch only supports Python 3.7-3.9. By default, Python 3.9 is not available on OSC, but Tom has it installed. So, we can use

source /fs/project/PAS1066/zhang_anaconda/anaconda3/bin/activate

to activate the conda installation that has Python 3.9 available.

Conda Environment

conda env list

This will list the already created conda environments. To remove an environment, type:

conda env remove -n env-name

To create a new environment (let’s call it pytorch-env):

conda create -n pytorch-env python=3.9

conda activate pytorch-env

Then, we can install some supporting packages

conda install h5py imageio jupyter matplotlib numpy tqdm

Then, we should follow the instructions at https://pytorch.org/get-started/locally/ to install PyTorch. If we only want to run on the CPU, we should install with the 'cpuonly' flag. When installing on OSC or Unity (or anywhere with CUDA available), we should first check the CUDA version. To do this, we can type

ls -d /usr/local/cuda*

which should list the installed CUDA version directories. At the moment, OSC doesn't seem to have CUDA 11.3, so it looks like I should use CUDA 10.2. From the PyTorch website, I would install this using

conda install pytorch torchvision torchaudio cudatoolkit=10.2 -c pytorch

To test if CUDA is installed correctly, we can type:

python

import torch

torch.cuda.is_available()

This should return True; however, on OSC and Unity we must explicitly request a GPU through a batch script or an interactive session for it to do so. Ricky has a page on LASERDOCS showing how to get the GPU setup working on OSC, along with a sample batch script for running a test file.
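Once a GPU has actually been allocated, a quick smoke test like the following (just a sketch) should print the device name and run a small matrix multiply on it; on a login node it will only report that CUDA is unavailable.

```python
import torch

print(torch.__version__, "CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    device = torch.device("cuda")
    x = torch.randn(1000, 1000, device=device)   # allocate directly on the GPU
    print("matmul on", torch.cuda.get_device_name(0), (x @ x).sum().item())
```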

Heating Time and Arber Formula

I've decided to do my best to finally figure out how Arber came up with the heating time formula. The following is based on Chapter 9 of the book by Hockney, which references his 1971 and 1974 papers.

Why Does Numerical Heating Happen?

In our PIC simulation, we compute the velocity at the next time-step $v_{i+1}$ by taking the velocity at the previous time-step $v_i$ and modifying it by some correction term that is proportional to the time-step length $\Delta t$. However, because we use a finite grid, approximate real particles with macro-particles, use a finite time-step, accumulate rounding errors, rely on finite differences, etc., we will develop some error $\delta E$ in our calculation of the (let's just consider electric) field. This corresponds to an error in the force $\delta F = e \delta E$, which over one time-step delivers an erroneous impulse $e \delta E \Delta t = m \delta v$. As a result, our error in calculating the velocity will be

\begin{equation} \delta v = \frac{e}{m} \Delta t \delta E \end{equation}

Now, we can assume that these errors in the field calculation are randomly distributed, so that the accumulated velocity deviation behaves like a random walk in velocity space. If we let $\Delta v$ be the total deviation of the calculated velocity from the true value, we should expect $\langle \Delta v \rangle = 0$ due to the symmetry of this random walk. However, the mean squared deviation will increase over time. After $n$ time-steps (each with the same characteristic random error $\delta E$), we would have

\begin{equation} \langle \Delta v^2 \rangle = n \delta v ^2 = n \frac{e^2}{m^2} \Delta t^2 \delta E^2 \end{equation}

Additionally, the average change in kinetic energy per particle would be given by

\begin{equation} \langle h \rangle = \frac{1}{2} m \langle (v_0 + \Delta v)^2 \rangle - \frac{1}{2} m \langle v_0^2 \rangle = \frac{1}{2} m (2 v_0 \langle \Delta v \rangle) + \frac{1}{2} m \langle \Delta v^2 \rangle = \frac{1}{2} m \langle \Delta v^2 \rangle \end{equation}

due to $\langle \Delta v \rangle = 0$. We can clearly see that this means that the average kinetic energy per particle will increase linearly with the number of time-steps $n$. And, in fact, this is exactly what we see in PIC sims.
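A toy check of this random-walk argument (not a PIC code, just a sketch): give every particle a small random velocity kick each step and confirm that $\langle \Delta v^2 \rangle$ grows linearly with the number of steps.

```python
# Random walk in velocity space: <dv^2> should grow like n * (delta_v)^2.
import numpy as np

rng = np.random.default_rng(0)
n_particles, n_steps, dv = 100_000, 200, 1e-3
delta_v = np.zeros(n_particles)
for step in range(1, n_steps + 1):
    delta_v += dv * rng.choice([-1.0, 1.0], size=n_particles)  # symmetric random kick
    if step % 50 == 0:
        print(step, np.mean(delta_v**2) / (step * dv**2))  # ratio should stay near 1
```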

Collision Time

The PIC simulations Hockney considers are meant to represent an ideal, collisionless Vlasov plasma. In an ideal, uniform, collisionless plasma, the orbits of ions and electrons are all straight lines. In the PIC simulation, however, errors in the calculated velocities and positions cause deviations from these straight lines. Hockney defines the collision time $\tau_c$ as the time it takes for the RMS deflection angle to reach 90 degrees: $\sqrt{\langle \phi^2(\tau_c)\rangle} = \pi/2$.

Hockney hypothesizes that the collision time is determined almost entirely by the following combination of parameters:

\begin{equation} \tau_c / \tau_{pe} \approx \frac{n_{ppc}}{\Delta x^2} \lambda_D^2 (1 + (W/\lambda_D)^2) \end{equation}

where $n_{ppc}$ is the number of computational macro-particles per cell and $W$ is the effective width of the macro-particle, which depends on the shape function being used ($W = \Delta x$ for a tophat, for example). $\tau_{pe} = \frac{2 \pi}{\omega_{pe}}$ is the plasma oscillation period. Hockney says: "The particle width plays the role of the shielding distance and hence the characteristic number determining collisional effects corresponding to the number of particles per Debye square is the number of particles within a cell."

Hockney states that, to within 20%, all of his measurements are fitted by this relation over the range $0.03 < \lambda_D / \Delta x < 8.33$.
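Written out as code, the collision-time estimate above looks something like this (a sketch; the tophat width $W = \Delta x$ is taken as the default):

```python
def collision_time_over_tau_pe(n_ppc, debye_len, dx, W=None):
    """tau_c / tau_pe from Hockney's estimate, with tau_pe = 2*pi/omega_pe.
    W is the effective particle width; W = dx corresponds to a tophat (cloud-in-cell) shape."""
    if W is None:
        W = dx  # tophat shape assumed by default
    return n_ppc * (debye_len / dx) ** 2 * (1.0 + (W / debye_len) ** 2)
```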

Heating Time

Hockney defines the heating time $\tau_H$ as the time it takes for the average energy per particle to increase by $\frac{1}{2} k_B T$. In 2D, the equipartition theorem gives an average energy of $k_B T$ to each of the electrons and ions (assuming they start at the same temperature), and thus the heating time is the time needed to increase the total kinetic energy of the system by 25% (because we assume the electrons do all of the heating while the more massive ions do not).

He finds, with his simulations, that the heating time is given by

\begin{equation} \tau_H / \tau_c = \frac{K_H}{(1 + (2 \Delta x / \lambda_D)^2)(1 + (\omega_{pe} \Delta t)^2)^{1/2}((\omega_{pe} \Delta t)^4 + (\Delta x / \lambda_D)^4)^{1/4}(1 + (v_{th} \Delta t / \Delta x)^2)} \end{equation}

which is a complicated function of three dimensionless quantities: $\{\lambda_D / \Delta x,\ \omega_{pe} \Delta t,\ v_{th} \Delta t / \Delta x\}$. Here $K_H$ is a proportionality constant that depends on how the particles are represented (the shape function and current smoothing are two such factors in EPOCH).
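Here is the same expression written out in code so the three dimensionless groups are explicit (a sketch, with $K_H = 100$ as the tophat default):

```python
def heating_time_over_collision_time(debye_over_dx, wpe_dt, vth_dt_over_dx, K_H=100.0):
    """tau_H / tau_c from Hockney's fit; debye_over_dx = lambda_D / dx."""
    denom = (
        (1.0 + (2.0 / debye_over_dx) ** 2)
        * (1.0 + wpe_dt**2) ** 0.5
        * (wpe_dt**4 + (1.0 / debye_over_dx) ** 4) ** 0.25
        * (1.0 + vth_dt_over_dx**2)
    )
    return K_H / denom
```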

However, if we restrict ourselves to some ‘optimal’ time-step where

\begin{equation} \omega_{pe} \Delta t_{opt} = \min(\Delta x / (2 \lambda_D), 1) \end{equation}

then he claims that this equation reduces to a simpler form that no longer depends on the time-step:

\begin{equation} \tau_H / \tau_c = K_H (\frac{\lambda_D}{\Delta x})^2 \end{equation}

image

This region of the ‘optimal’ parameter space is informed by the following considerations:

  1. $v_{th} \Delta t / \Delta x < 1$ should be true because we don’t want a particle to travel more than one grid cell in one time-step
  2. $\omega_{pe} \Delta t < 2$ so that the explicit finite-difference approximation to Newton’s Laws of Motion is stable
  3. $v_{th} \Delta t / \Delta x > 0.1$ because making the time-step too short won’t add any precision due to the finite grid-spacing of $\Delta x$.

For a tophat shape function (which he calls 'Cloud in Cell'), Hockney finds $K_H = 100$.

Relation to Arber Formula

Now, let's combine the heating and collision time estimates to recover the formula from the Arber paper. Multiplying the two relations gives

\begin{equation} \frac{\tau_H}{\tau_c} \frac{\tau_c}{\tau_{pe}} = \frac{\tau_H}{\tau_{pe}} = K_H \frac{\lambda_D^4}{\Delta x^4} n_{ppc} \left(1 + \frac{\Delta x^2}{\lambda_D^2}\right) \end{equation}

Now, if we assume $\lambda_D \ll \Delta x$, we can get a conservative estimate for the heating time: the $1$ in the parentheses can be dropped, which makes $\tau_H$ smaller. If we didn't make this approximation, the heating time would be larger (less heating), so this gives an upper bound on the amount of heating from this model. I will also substitute $K_H = 100$ and $\tau_{pe} = 2\pi/\omega_{pe}$.

\begin{equation} \tau_H \approx 200 \pi \frac{n_{ppc}}{\omega_{pe}} (\frac{\lambda_D}{\Delta x})^2 \end{equation}
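To plug numbers into this estimate, something like the following works (a sketch; the density, temperature, cell size, and particles-per-cell in the example call are placeholders, not the parameters of any particular run):

```python
import numpy as np
from scipy.constants import epsilon_0, e, m_e

def heating_time_s(n_e_cm3, T_eV, dx_m, n_ppc):
    """tau_H ~ 200*pi*(n_ppc/omega_pe)*(lambda_D/dx)^2, in seconds."""
    n_e = n_e_cm3 * 1e6                                    # convert to m^-3
    omega_pe = np.sqrt(n_e * e**2 / (epsilon_0 * m_e))     # plasma frequency, rad/s
    debye = np.sqrt(epsilon_0 * T_eV * e / (n_e * e**2))   # Debye length, m
    return 200.0 * np.pi * (n_ppc / omega_pe) * (debye / dx_m) ** 2

print(heating_time_s(n_e_cm3=1e23, T_eV=100.0, dx_m=5e-9, n_ppc=100))
```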

At this point, I tried to convert units to get the expression that Arber produces (eq. 30 in his paper). Maybe I'm just bad at converting units, but I was not able to confirm the result of $\alpha_H = 5 \times 10^4$ for a tophat shape function.

But regardless, we would get the heating estimate by defining

\begin{equation} \frac{dT}{dt} = \frac{T_0}{\tau_H} \end{equation}

which, since $\tau_H$ is constant, integrates to

\begin{equation} T(t) = T_0 (1 + t/\tau_H) \end{equation}

which shows the desired linear increase of temperature over time.

Things to Do

  • Work through Tom’s Notebooks and Ricky’s write-up to better understand his code
  • Can continue trying to install NF4IP
  • Prepare for TAing next week.
  • Can re-run Arber-like simulations keeping the ratio of Debye Length to Cell Size constant.
Written on August 15, 2022