How to cook nuclear pasta with Python
Theory prerequisites
Disclaimer: I wrote this in sophomore (U1) year (2016) and it has not yet been thoroughly vetted.
Neutron Stars
A Neutron Star is one of three types of zombie stars, or, in more physically-apt terms, one of three types of compact objects. These three types of compact objects, which each form after the “death” of a normal star, are the black hole (BH), neutron star (NS), and white dwarf (WD).
A star is said to die when it runs out of “fuel”: because it is no longer pumping out thermal energy from its inside, it can no longer push out on itself, and it collapses under its own gravity. When this gravity is too strong - when the mass of the dying star is too large - a BH will form. In all other cases, the collapse will be stopped by something called degeneracy pressure.
Degeneracy pressure is a pressure caused by quantum mechanical principles, and is only encountered when particles are pushed to extreme densities. When particles are packed closely enough, as they are during the collapse of a star, the Pauli exclusion principle still holds, meaning that particles in the same space cannot be in the same quantum state. So as particles are shoved together, they must compensate by increasing their energy to reach higher quantum states. The force pushing these particles to those higher states then pushes back against the collapse of the star, giving what we call degeneracy pressure. Matter in this state is referred to as degenerate matter.
There are two limits that are relevant to this discussion. The first is the Chandrasekhar limit: \(1.38 M_\odot\). As a star collapses on itself, the core of the collapse will begin to resist the collapse with degeneracy pressure. This pressure pushes back on the in-falling matter - forcing it to rebound off of this core. If the remaining matter is less than the Chandrasekhar limit, it is supported by electron degeneracy pressure - that is, degeneracy pressure created by electrons. This forms a WD. For masses greater than this, the star will continue to collapse into a NS, where it is then supported against further collapse by neutron degeneracy pressure. The limit on this support is much more uncertain, with the Tolman-Oppenheimer-Volkoff limit putting it in the range of \(1.5\text{-}3.0 M_\odot\). Further collapse will form a BH.
For masses between these two limits, then, a NS - the most compact type of star in the universe - is what is left over from the original star.
The reason NSs are ‘in fashion’ nowadays, and the reason a near-30-year NS lull was broken in the late 60’s, is the discovery of the pulsar. A pulsar is, from our perspective, a regularly pulsing star in the sky - some pulse up to 716 times a second. A pulsar is not little green men sending data packets, but is in fact a rotating neutron star.
Neutron stars emit light in a hollow cone shape from their magnetic poles. As is also true of the Earth, a pulsar’s magnetic poles are offset from its spin axis - this means that as it spins, the beam of light sweeps across the universe, passing over us once every spin, much as a lighthouse illuminates a boat on every turn.
Equation of state
There is one uncertainty in the research of neutron stars that remains as controversial as ever: the equation of state of dense nuclear matter. The equation of state is simply a specification of the set of relationships between such variables as mass, density, pressure, temperature, energy, and so on. As nuclear matter at densities of \(\gtrsim 10^{13}\) g cm\(^{-3}\) is rather difficult to create in experiment, physicists have used neutron stars as (very) remote laboratories to test dense-matter equations of state.
There are an infinite number of possible equations of state, falling into a finite number of categories - called families. “Constraints” are applied to these families to reduce their size. Making observations of neutron stars (i.e., looking at them) gives us information about them, so that we can make inferences about the true equation of state of a neutron star, and thereby eliminate extraneous models from our theory. Pulsar timing, which measures Keplerian orbital parameters, allows us to fill in equations that give incredibly precise measurements of a NS’s mass. The radius, however, is much more difficult to constrain - but these two parameters together form the mass-radius relationship, which is the only aspect of the equation of state relevant to the total structure of a NS.
Different families of equations of state can be seen below, plotted on a mass-radius graph.
As may be evident from this picture, we still have a long way to go in understanding the way matter behaves at extremely high density. One of these mysteries lies in the NS crust, at densities of \(\sim 10^{14}\) g/cm\(^3\).
Neutron Star Crust
The NS crust is about 1 km thick, and varies from densities of practically zero in the outer atmosphere, to \(\sim 10^{14}\) g/cm\(^3\) at the inner mantle. Despite holding only a small fraction of the total mass of a neutron star, the crust has an outsized share of effects on observations of neutron stars, and is therefore incredibly important to understand when making inferences about NSs and their various properties (see this paper for an excellent review).
This extremely high density causes matter to behave completely unlike it normally would. As before, a good way to describe this environment is to start from the surface of the NS and go down. Starting from the outer crust of a NS, a density of about \(10^4\) g/cm\(^3\) produces matter consisting of a BCC lattice of iron plasma. Going further down, electron capture becomes more favourable, producing nuclei which are more neutron-rich. Reaching the inner crust, we hit the neutron-drip density of \(\rho_\text{ND}\sim 10^{11}\) g/cm\(^3\), at which neutrons quite literally “drip” out of the nuclei - producing free degenerate neutrons. Passing the inner crust and entering the mantle, at around the nuclear saturation density of \(\rho_0 = 2.8\times 10^{14}\) g/cm\(^3\), we begin to see some incredibly strange effects…
Nuclear Pasta
As we continue to move deeper into the crust of a neutron star, and as \(\rho \rightarrow \sim 10^{14}\) g/cm\(^3\) and beyond, we encounter what is called “Coulomb frustration”. This is what happens when nuclear forces and Coulomb repulsion begin to balance out, so that particles become frustrated. This term is used in many different disciplines to describe systems with many competing low-energy states. A proton, for instance, may have a difficult time deciding if it wants to join a cluster of other protons for the nuclear attraction, or stay far back for the Coulomb repulsion, and so it is frustrated. This effect leads to the formation of interesting topologies known collectively as Nuclear Pasta.
As we move through the top of the mantle, nuclei begin to become non-spherical. They begin to form into long rods of nuclei surrounded by the neutron sea - what are called Nuclear Spaghetti. Even deeper into the mantle, these rods fuse together, and form slabs, which are then called Nuclear Lasagna. Deeper still, at greater densities, the lasagna fuses to create uniform nuclear bulk matter with long holes of neutron superfluid - the Nuclear Bucatini. Finally, we are left with simple holes of neutron superfluid, giving the term Nuclear Swiss Cheese. At even greater densities, the nuclei “dissolve” into the neutron sea, transitioning into the superfluid core.
Current Nuclear Pasta research
What follows are examples of some current areas in nuclear pasta research. As astronomers learn more about the importance of pasta in influencing various properties of the neutron star, these research areas will continue to see expansion.
Topology
The topology of Nuclear Pasta refers to its shape. The different phases, their prevalence, and precisely how they are shaped are all subjects of current research. One particularly important question in the topology of nuclear pasta concerns the frequency of “defects”. For example, in nuclear lasagna, between the sheets there are occasional helical ramps - structures shaped like parking-garage ramps, which form passages between two adjacent nucleonic planes. These very occasional defects can have large implications for the neutron star’s crust and even its observational properties, as electrons can scatter off of these ramps, which can also slow down the cooling rate of neutron stars. Other topics of research in this area include defining the shape of the matter at the transition point between two phases of nuclear pasta, as well as finding new phases altogether - e.g., the gyroid phase, waffle phase, and spongelike phases, to name a few.
Conductivity
Crust conductivities, both thermal and electrical, are very important for determining larger-scale properties of a neutron star. The thermal conductivity of the NS crust helps determine the temperature profile of the entire star, and controls the cooling rate of the crust - something that is observable.
Current research in this subject relates to performing more accurate approximations of the conductivities of the NS crust, based on the most up-to-date research on the crust and pasta structure. As discussed in the last section, impurities in the nuclear pasta have a large effect on the electrical and thermal conductivities in the mantle of the crust - in that the more disordered (more defects like the helical ramp) the nuclear pasta is, the lower the conductivity.
Observational evidence
The biggest challenge in nuclear pasta research is the unfortunate question of whether it exists at all (is this all for nothing?). Fortunately, there is a growing body of evidence for the existence of exotic matter in the NS crust, as well as in supernovae (where similar densities are reached). It could be that researchers who have dedicated decades of their lives to learning about nuclear pasta are disinclined to publish papers on evidence against its existence, but I will still give a complete outline of the many different ways academics have inferred the presence of pasta in the universe from observation.
As discussed in the previous section, nuclear pasta plays a primary role in determining the conductivity of a NS’s crust. This feeds directly into cooling effects, which should be evident in observations of certain NSs. Building on this idea, specific comparisons to measurements of stars have been made - such as the observation of rapid cooling in the crust of the NS in the supernova remnant Cassiopeia A - to provide some evidence for pasta’s existence.
In addition to this, there have been several papers relating giant flares to pasta (a giant flare being a short burst of gamma-ray radiation from a very magnetic neutron star). The author of these papers suggests that the oscillations of these bursts are related to properties of the NS crust, which can tell us whether or not pasta is present.
There have also been a few papers about the effect of pasta on the spins of NSs. One author suggests that the evident limit on how fast X-ray NSs (NSs which emit X-ray light) can spin is due to an amorphous layer of nuclear charge - i.e., pasta. Additionally, one model for “glitches” (sudden changes in a NS’s spin) relates them to pasta here.
Interatomic potential
The interatomic potential is simply a potential-energy function describing the interaction between two particles. It is the single most important specification of a molecular dynamics simulation.
In these simulations, the experimenter selects the potential based on both its accuracy and its efficiency. An accurate potential is of course important for an accurate simulation of whatever is being tested, but efficiency forms the bottom line on what you can and cannot simulate - an efficient potential also allows you to perform larger simulations, which may help improve the accuracy of whatever parameters you are estimating.
What follows is a list of a few different potentials, to give you an idea of what they look like and the various parameters associated with them (a short Python sketch of all four follows the list). The last two potentials will be used later in this notebook for the purposes of simulating pasta.
- Coulomb potential \(U_{ij}=\kappa_e\frac{q_iq_j}{r_{ij}}\) where \(\kappa_e\) is Coulomb’s constant, and \(q_i\) is the charge of particle \(i\)
- Models the interaction between two charged particles in a vacuum.
- The Lennard-Jones (LJ) potential \(U_{ij}=4\epsilon\left[\left(\frac{\sigma}{r_{ij}}\right)^{12}-\left(\frac{\sigma}{r_{ij}}\right)^{6}\right]\) where \(\epsilon\) is the depth of the potential well and \(\sigma\) is the finite distance at which \(U=0\)
- Simple model for the interaction between two neutral atoms in a vacuum.
- This is the most extensively used potential in molecular dynamics simulations.
- The Yukawa potential \(U_{ij}=A\frac{\exp(-\kappa r_{ij})}{r_{ij}}\) where \(A\) is the amplitude of the interaction, and \(\kappa\) is the inverse Debye length
- Models a screened Coulomb potential, such as in a plasma.
- The Gaussian potential \(U_{ij}=-A \exp(-B r_{ij}^2)\) where \(A\) and \(B\) are the two coefficients of the interaction
- Models the interaction of particles in a Gaussian potential well
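As a quick reference, here is a minimal Python sketch of these four potentials as functions (the constant and parameter names are our own placeholders, not values we will use later):

```python
import numpy as np

K_E = 8.9875517923e9  # Coulomb's constant (N m^2 C^-2)

def coulomb(r, qi, qj):
    """Coulomb interaction between two point charges in a vacuum."""
    return K_E * qi * qj / r

def lennard_jones(r, eps, sigma):
    """LJ potential: eps is the well depth, sigma the zero-crossing distance."""
    return 4 * eps * ((sigma / r) ** 12 - (sigma / r) ** 6)

def yukawa(r, A, kappa):
    """Screened Coulomb potential; kappa is the inverse Debye length."""
    return A * np.exp(-kappa * r) / r

def gaussian(r, A, B):
    """Gaussian potential well with coefficients A and B."""
    return -A * np.exp(-B * r ** 2)
```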
Computer prerequisites
Selecting a simulator
Simulations of nuclear pasta are done with what are called “molecular dynamics” programs. These programs create lists of point particles, and then calculate their trajectories as they interact with each other over a series of timesteps. This process emulates actual molecular dynamics fairly well.
You have the option to do a classical molecular dynamics simulation, which is what we will attempt later on in this tutorial, or a quantum molecular dynamics simulation, which is more accurate, albeit more complicated and slower to perform.
There are many, many different molecular dynamics (MD) simulators out there. Here is a list of some of the most popular free options, with a brief description of what they are known for.
- Gromacs
- Open-source, extremely popular MD program for protein analysis. Used in the Folding@Home project. For its purposes, it is the fastest MD program in existence, due to very specific optimizations. Only has Lennard-Jones and Buckingham type potentials built-in, but can be reconfigured for additional potentials. Parallel and GPU capabilities are built-in.
- LAMMPS
- Open-source, very popular general-purpose MD code. There are 111 different interatomic potentials built-in, all of which can be combined into new potentials. LAMMPS will be used in this tutorial for pasta simulations. Parallel and GPU capabilities are built-in.
- CHARMM
- Closed-source, paid MD code mainly for use in simulating biomolecular systems. It uses a specific family of potentials, also called CHARMM, which can also be used in other MD codes. Parallel and GPU capabilities are built-in.
- NAMD
- Open-source MD code for use in simulating large biomolecular systems. Differentiates itself from other MD codes by its use in the most massively parallel MD simulations out there. Parallel and GPU capabilities are built-in.
- AMBER
- Open-source MD code for use in simulating biomolecular systems. As with CHARMM, AMBER is also a family of potentials, used primarily with this software.
For the purposes of this tutorial, we will be using the Python interface of LAMMPS, which is a very popular (\(\Rightarrow\) well-documented) general-purpose molecular dynamics code, and comes preinstalled on Calcul Québec’s compute servers.
To follow along with this tutorial, you will need a Python-enabled LAMMPS installation available.
Installing LAMMPS
I assume you also want to install LAMMPS on your local machine before you get involved in large pasta simulations aboard a compute cluster. Your first step is to download a tarball from here - download the stable version. The one used in the making of this document is the stable version of February 16, 2016, though I assume this tutorial will still suffice in the near future.
I will briefly run through a perfect-scenario installation in the following steps. In all other cases, you will probably want to have a good read-through of the Getting Started page. Note that you do not need to have root access to your machine to follow these build instructions!
Here are some assumptions I make about the machine you are installing to, in addition to your motives for this installation. If any of these assumptions are violated, I leave a note on how to make amends:
- You are working on a Linux or Mac operating system.
- *Else (Windows): look here.*
- You only want to use the Python-interface of LAMMPS.
- Else: Consult the Getting Started page for installation instructions on how to build the LAMMPS executable, enabling you to work with LAMMPS input files.
- You do not want to use your GPU.
- *Else: Consult the Getting Started page on building additional packages.*
- You do not want to install any of the optional packages.
- Else: Consult the Getting Started page.
- **You already have the following things installed: gcc, fftw, mpich**
- Else: Download, build and install them. fftw and mpich, like a majority of open-source \(*\)nix libraries out there, can be built and installed with the usual set of commands (shown below).
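  (The usual autotools sequence, run from each library's source directory; `--prefix` keeps the install user-local, since root access is not required:)

  ```sh
  ./configure --prefix=$HOME/.local
  make
  make install
  ```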
Installation steps:

1. Open `sh`, or a close variant, and `cd` into the folder you downloaded the tarball into, then issue the following command to unzip it.
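   (A sketch; the exact filename depends on the tarball version you downloaded:)

   ```sh
   tar -xzvf lammps-stable.tar.gz
   ```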
2. `cd lammps*/src` - that is, move into the newly created LAMMPS folder, and go into the `src` directory.
3. You are now going to build LAMMPS as a “shared library”. This will produce library files in the form of `.so`, rather than `.a`, which will enable LAMMPS to be loaded at execution time of a program, allowing you to use it in Python. Next is where things get trickier. Issue the following command to see a long list of all possible build options.
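   (Running `make` with no arguments from within `src/` prints this help listing:)

   ```sh
   make
   ```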
4. Scroll through to the bottom of this list, until you find a list of various options entitled something like “one of these from src/MAKE/MACHINES”. This is a list of different families of operating systems that have customized makefiles. If you find one that matches your machine, include it in the next step. Else, more generically, you can use `mpi` for a parallel-enabled LAMMPS (\(\Rightarrow\) faster), or `serial` for non-parallel.
5. This is the make-or-break step. Issue the following command, where “machine” is the phrase you found in the previous step:
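   (Here `machine` is whatever you found above, e.g. `make mode=shlib mpi`; the `mode=shlib` flag is what requests the shared-library build:)

   ```sh
   make mode=shlib machine
   ```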
   If this does not work, consult the errors section below.
6. `cd ../python` into the Python-interface subdirectory of the main LAMMPS folder. Issue the following command.
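   (In LAMMPS versions of this era, the `python/` directory ships an `install.py` that copies the module and shared library into place; if your version differs, consult its README:)

   ```sh
   python install.py
   ```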
7. If this runs without issue (you can run `echo $?` to check - it should print out a 0), you should be good. Now you can move on to the LAMMPS-focused parts of the tutorial.
Errors
- Make errors
  - Do you satisfy all the dependencies? i.e., gcc, mpi, and fftw?
  - Did you install using the correct machine specification? If this does not work, try just using `mpi` or `serial` as your machine.
  - A common error is LAMMPS trying to find an fftw.h file. In this case, modify your `src/MAKE/Makefile.machine` file so that `FFT_INC = -DFFT_NONE`, as specified in the Getting Started page.
- Python import errors
  - e.g., Python does not recognize the module `lammps`.
    - Ensure that LAMMPS was compiled as a shared library (i.e., `mode=shlib`), and that step 6 ran to completion.
  - For all other errors, …
- All other errors
  - Consult the Getting Started page
  - Search for a question on the LAMMPS mailing list archives
Nuclear Pasta Simulation
Selecting a potential/force field
Several potentials are described in the theory section. For the purposes of this tutorial, we will use an overlay of two Gaussian potentials and one Yukawa potential, following the same formulation as in [1], but without the presence of neutrons: \(V_{ij} = A_1 \exp(-B_1 r_{ij}^2) + A_2 \exp(-B_2 r_{ij}^2) + A_3 \frac{\exp(-\kappa r_{ij})}{r_{ij}}\)
We take \(A_1 = 110\) MeV, \(A_2 = -2\) MeV,
Setting up the simulation
Import lammps into your IPython environment:
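(A minimal sketch, assuming the shared-library build from the installation section:)

```python
from lammps import lammps

lmp = lammps()  # start a LAMMPS instance inside Python
```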
In the following lines, we will issue commands in the form of
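(Schematically - the string is exactly what you would write in a LAMMPS input script; the command below is a placeholder, not a real one:)

```python
lmp.command("some_lammps_command arg1 arg2")
```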
This is so that when you begin to use a compute cluster, you are comfortable with using input scripts, and don’t need to use a python interface. Simply remove the python function wrapper from the input string, and copy that string into an input file.
First, we set some basic parameters about our simulation. We set the units to metal, which means we have the following units specification:
- mass = grams/mole
- distance = Angstroms
- time = picoseconds
- energy = eV
- velocity = Angstroms/picosecond
- force = eV/Angstrom
- torque = eV
- temperature = Kelvin
- pressure = bars
- dynamic viscosity = Poise
- charge = multiple of electron charge (1.0 is a proton)
- dipole = charge*Angstroms
- electric field = volts/Angstrom
- density = gram/cm^3
Let’s set up our simulation with these units, and specify how many dimensions we wish to work in.
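(A sketch of the corresponding commands; `lmp` is the instance created above:)

```python
lmp.command("units metal")
lmp.command("dimension 3")
```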
We set the simulation boundary to be ‘periodic’ in all three dimensions, which means that a particle exiting through the high side of the x-, y-, or z-axis re-enters through the low side, and particles can interact with particles on the other side of these boundaries as well.
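```python
lmp.command("boundary p p p")  # periodic in x, y, and z
```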
We then set up our simulation as an atomic one, with a neighbor-list skin distance of \(0.3\times 10^{-5}\) fm past the force cutoff.
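```python
lmp.command("atom_style atomic")
lmp.command("neighbor 0.3e-5 bin")  # neighbor-list skin past the force cutoff
```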
Now we set up the placement of our particles in a BCC lattice with a spacing of \(a=3\) fm. We create a cube of \(11^3\) BCC units, and fill the first \(10^3\) BCC units with particles.
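(A sketch; the region names `box` and `fill` are our own:)

```python
lmp.command("lattice bcc 3.0")                   # BCC lattice with spacing a = 3
lmp.command("region box block 0 11 0 11 0 11")   # 11^3 lattice cells
lmp.command("create_box 1 box")                  # one atom type
lmp.command("region fill block 0 10 0 10 0 10")  # sub-region we will populate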
Let’s set up some variables for future calculations. The constants given in (*) are those used in (Horowitz et al., 2005)
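(A sketch of such bookkeeping variables; apart from \(A_1\) and \(A_2\) quoted above, the values are placeholders for the Horowitz et al. (2005) constants:)

```python
a_lat = 3.0            # lattice spacing (fm)
A1, A2 = 110.0, -2.0   # Gaussian amplitudes (MeV), from the text above
# B1, B2, A3, kappa: fill in from Horowitz et al. (2005)
```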
Now we actually place our particles in the box, and then calculate how many total we have.
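```python
lmp.command("create_atoms 1 region fill")  # populate the fill region
natoms = lmp.get_natoms()                  # how many particles we placed
print(natoms)
```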
Let’s compute a few more variables for future calculations:
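(A sketch; these derived quantities and their names are ours, not from the original notebook:)

```python
box_side = 11 * a_lat        # box edge length
volume = box_side ** 3       # simulation box volume
n_density = natoms / volume  # particle number density
```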
Now we can set up our potential. The first step is to define which functions we are going to use. In this case, we use an overlay (`hybrid/overlay`) between two Gaussian potentials and one Yukawa.
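(A sketch; the global cutoffs and the Yukawa screening length below are illustrative values:)

```python
lmp.command("pair_style hybrid/overlay gauss 10.0 gauss 10.0 yukawa 1.0 10.0")
```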
We input more parameters to this potential. We should now have the full potential set to \(U_{ij}=-a \exp(-r_{ij}^2/\Lambda^2)-b \exp(-r_{ij}^2/2\Lambda^2)+c\frac{\exp(-\kappa r_{ij})}{r_{ij}}\)
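(In `pair_coeff`, the sub-style name plus an index picks out which `gauss` instance the coefficients belong to; the numbers below are placeholders for the constants above - mind that LAMMPS's `gauss` style already includes a minus sign, \(E = -A e^{-Br^2}\):)

```python
lmp.command("pair_coeff * * gauss 1 110.0 0.5")  # A1, B1 (placeholders)
lmp.command("pair_coeff * * gauss 2 -2.0 0.25")  # A2, B2 (placeholders)
lmp.command("pair_coeff * * yukawa 1.0")         # A3 (placeholder)
```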
We create a computation called `new`, which calculates the average temperature (`temp`) of all particles in the simulation. We then use the `velocity` command to set the temperature of all particles to 0 - meaning that all particles are immediately given zero velocity.
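```python
lmp.command("compute new all temp")          # average temperature of all particles
lmp.command("velocity all set 0.0 0.0 0.0")  # freeze: zero every velocity
```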
We now begin the application of NVE integration to our particles, along with a constraint that keeps the system’s net trajectory static (see the sketch below).
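(One way to keep the net trajectory static is to periodically zero the total linear momentum - a sketch, not necessarily the author's exact choice:)

```python
lmp.command("fix 1 all nve")                        # classical NVE time integration
lmp.command("fix 2 all momentum 100 linear 1 1 1")  # zero net momentum every 100 steps
```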
Set the simulation timestep to \(\Delta t=5\times10^{-14}\) ps, which is small enough to prevent our atoms from shooting far outside of the simulation box within one timestep.
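```python
lmp.command("timestep 5e-14")  # in ps, given the metal units set earlier
```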
Note: if you receive an error that “atoms have been lost”, or the kernel dies in the middle of running a simulation, you need a weaker potential, heavier particles, or a smaller timestep.
Set up our temperature computation to be run every 1000 timesteps.
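(A sketch; `c_new` references the `new` compute defined above:)

```python
lmp.command("thermo_style custom step c_new")  # report our temperature compute
lmp.command("thermo 1000")                     # every 1000 timesteps
```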
Now we are ready to simulate.
What follows is a series of `run` commands - the command that steps forward in time while applying classical laws of motion - as well as `velocity` commands, which cool our particles back down to reasonable levels.
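(A sketch of the freeze/re-freeze cycle; the cycle count and run length are arbitrary:)

```python
for _ in range(10):
    lmp.command("run 1000")                      # advance in time
    lmp.command("velocity all set 0.0 0.0 0.0")  # re-freeze
```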
Now that we have frozen and refrozen our particles many times, they should be reasonably spread out in our box. We can now take them up to our desired temperature:
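(A sketch; the target temperature and random seed are illustrative:)

```python
T_target = 1.0  # desired temperature (K, in metal units)
lmp.command("velocity all create {} 87287".format(T_target))
lmp.command("run 5000")
```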
If you supplied favourable conditions for the formation of pasta, it might exist currently in the simulation. Now we can dive into the analysis, to learn a bit more about what we have created:
Analysis
Data Dump
Assuming the above simulation has run to completion, we can now perform whatever analysis we like on the resultant placements and velocities of our particles. The first step is to grab the coordinates of all the atoms and store them in `coords`. We will also dump the coordinates to a time-tagged dump file, as a backup in case of a large simulation that you just conducted.
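(A sketch using the library interface; the timestamp suffix is our own convention:)

```python
import time

coords = lmp.gather_atoms("x", 1, 3)  # ctypes array of length 3 * natoms
lmp.command("write_dump all atom dump.pasta.{}".format(int(time.time())))
```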
This is all we need to consider the topology of our simulation. Next, for consistency checking, let us do a simple matplotlib 3D plot of all our particles.
For the following 3D visuals, you need to have `mplot3d` installed with your Python distribution. Other possibilities for viewing a 3D plot of the particles include `plotly` and `VPython`.
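(A minimal sketch with matplotlib; `coords` comes from the gather above:)

```python
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D  # noqa: F401 - registers the 3d projection

xyz = np.array(coords[:]).reshape(-1, 3)  # (natoms, 3) positions

fig = plt.figure()
ax = fig.add_subplot(111, projection="3d")
ax.scatter(xyz[:, 0], xyz[:, 1], xyz[:, 2], s=2)
plt.show()
```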
(Despite its initial appearance, the above sample plot contains clumps of matter, which form the gnocchi phase shown in the plot further below.)
Voxel Mapping
Following along with Lang et al. (2001), sections 3 and 4, we can treat our list of point particles as though it were a surface, and calculate topological parameters accordingly.
To summarize very briefly what we are about to do, we will take a histogram of the different neighborhood configurations of the particles (how they are arranged relative to their adjacent particles), and then be able to infer how the entire surface looks.
The first step is to create a list of cells in the simulation - indicated by a vertex at each \((\Delta i,\Delta j,\Delta k)_{i,j,k\in \mathbb{N}}\), where \(\Delta\) is a specified step size. Once we have this list of vertices, we can assign each of them a charge density based on our distribution of protons and an assumed Gaussian charge density surrounding each proton. We compute this as follows:
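(A numpy sketch; the grid step, size, and Gaussian width are placeholders, and periodic images are ignored for brevity:)

```python
import numpy as np

delta, n, sigma = 1.0, 30, 1.0  # grid step, vertices per side, charge-cloud width

axis = np.arange(n) * delta
X, Y, Z = np.meshgrid(axis, axis, axis, indexing="ij")

density = np.zeros((n, n, n))
for x0, y0, z0 in xyz:  # proton positions gathered above
    r2 = (X - x0) ** 2 + (Y - y0) ** 2 + (Z - z0) ** 2
    density += np.exp(-r2 / (2 * sigma ** 2))
```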
Now that we have a charge density at each point, we can turn the points into binary values (points called “voxels”). If a point has a charge density past a certain threshold (which, for the purposes of this notebook, we determine based on a reasonable fractional volume), then the corresponding voxel is given a value of 1. Otherwise, it is 0.
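(A thresholding sketch - here the cut sits at the 90th percentile, as a stand-in for a “reasonable fractional volume”:)

```python
threshold = np.percentile(density, 90)
voxels = (density > threshold).astype(int)
```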
These voxels produce an image which is a much better representation of the topology of our structure:
(Sample voxel plot showing gnocchi phase with \(\sim\)4100 particles over \(30^3\) voxels)
Next, we begin the neighborhood calculations. The “grey-tones” are an array which serves to capture all of the neighborhood configurations in this image; they are produced by a one-to-one mapping from the voxels. Each possible grey-tone value represents one possible 2\(\times\)2\(\times\)2 cubical lattice configuration: there are eight vertices, each of which can be filled or not filled, and these can be converted to an integer at each vertex via powers of two. The grey-tone value at each vertex thus encodes the type of structure of the neighborhood it represents (e.g., whether its surrounding voxels are 1’s or 0’s). Binning this over all cells gives a histogram which is completely representative of the shape of our configuration.
The grey-tone equation is \(g_{i,j,k} = \sum_{l,m,n=0}^{1} 2^{l+2m+4n} v_{i+l,j+m,k+n}\) where \(v_{i+l,j+m,k+n}\) is the voxel value at point \(i+l,j+m,k+n\). For example, when we have all eight voxels in this neighborhood filled (see below), we achieve a value of \(255\).
These grey-tones are then binned into a histogram, thereby giving the frequency of each possible neighborhood configuration. For example, as seen in the below representation, we have that \(h_{255}=\) the number of times there appears a full neighborhood configuration (which gives a greytone value of \(255\)).
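(A numpy sketch of both steps, following the grey-tone formula above:)

```python
# grey-tone at each vertex: 8 corner voxels encoded as bits via 2^(l + 2m + 4n)
g = np.zeros((n - 1, n - 1, n - 1), dtype=int)
for l in range(2):
    for m in range(2):
        for k in range(2):
            g += (2 ** (l + 2 * m + 4 * k)) * voxels[l:n - 1 + l, m:n - 1 + m, k:n - 1 + k]

# histogram over all 256 possible neighborhood configurations
h = np.bincount(g.ravel(), minlength=256)
```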
This histogram contains all the information about our structure that we need. It tells us how many times each configuration occurs in our simulation structure, which can then be used to calculate the occupied volume, surface area, and estimated density of the integrals of curvature. The curvature of our structure is what we need to classify it as different types of pasta.
First we calculate the estimated density of the integral of the total curvature. We do this with the surprisingly simple formula \(K_V = \langle h,v \rangle\), where \(v\) is an independent parameter vector, calculated in this article.
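(Assuming `v` holds the 256 per-configuration weights tabulated in the article - not reproduced here - the estimate is just a dot product:)

```python
v = np.zeros(256)   # placeholder - fill with the weights from Lang et al. (2001)
K_V = np.dot(h, v)  # estimated density of the integral of total curvature
```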
This parameter is proportional to the total curvature of our structure, and tells us its basic topological characteristics. We now calculate the estimated density of the integral of the mean curvature, which is given by a much longer equation, described in Lang et al. (2001).
Here are some conclusions we can form about our simulation based on these values, using the categorization designed in Schneider et al. (2013). Make note of the additional pasta names used to specify categories related to increased connectedness/thickness of the pasta. Note also that the quality of this classification depends on the resolution and parameterization of your analysis - it does not necessarily reflect what is actually contained in your simulation.
Additional Topics
Plasma Coupling Parameter
\(\Gamma\), the plasma coupling parameter, is a measure of the mean potential energy divided by the mean kinetic energy for particles in a one-component plasma (a plasma with only one particle species). This variable, which in a fixed simulation box depends only on the temperature, can tell you precisely when a lattice structure will melt - something that is easy to demonstrate in our molecular dynamics simulation. We have the following for its formal definition: \(\Gamma=\frac{e^2}{4\pi \epsilon_0}\frac{1}{a k_B T}\) where \(T\) is the temperature of the particles in the simulation box, \(k_B\) is Boltzmann’s constant, \(e\) is the elementary charge, \(\epsilon_0\) is the permittivity of free space, and \(a=(4\pi n/3)^{-1/3}\) is the interparticle distance, for number density \(n\) of the particle of interest.
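(A small helper implementing this definition, in SI units:)

```python
import numpy as np

E_CHARGE = 1.602176634e-19  # elementary charge (C)
EPS0 = 8.8541878128e-12     # vacuum permittivity (F/m)
K_B = 1.380649e-23          # Boltzmann constant (J/K)

def coupling_parameter(T, n):
    """Plasma coupling parameter for a one-component plasma of number density n."""
    a = (4 * np.pi * n / 3) ** (-1.0 / 3.0)  # interparticle distance
    return E_CHARGE ** 2 / (4 * np.pi * EPS0) / (a * K_B * T)
```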
The following property is of interest: when \(\Gamma\) is near \(175\), the one-component plasma in question will crystallize into a lattice. As discussed in the next section, we have an easy way to test for this, with something called the radial distribution function.
Radial Distribution Function
The radial distribution function, denoted \(g(r)\), is simply a histogram of the distances between particles in a molecular dynamics simulation box, normalized against the same histogram for an ideal gas.

This distribution tells you how often two atoms sit at a distance between \(r\) and \(r+dr\) of each other, relative to an ideal gas. Continuing our previous topic: if you see sharp peaks in the histogram, your particles have formed some sort of regular structure, such as a lattice. What we can then do is demonstrate that as \(\Gamma \nearrow 175\), the plasma being measured will begin to show a spiked radial distribution function.
The following code can be rerun, with different values of gamma, in order to test different results.
Get coordinates from the LAMMPS simulation.
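(The same gather as before - a sketch:)

```python
coords = lmp.gather_atoms("x", 1, 3)      # positions from the running simulation
xyz = np.array(coords[:]).reshape(-1, 3)  # (natoms, 3)
```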
The following code does a simple interactive point plot of this simulation’s results.
Now, we calculate the distance between every single pair of particles (this part may hang).
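(A brute-force sketch with scipy; it ignores periodic images, which is exactly the bias noted below:)

```python
import numpy as np
from scipy.spatial.distance import pdist

dists = pdist(xyz)  # every pairwise distance (O(N^2) - may hang for large N)
hist, edges = np.histogram(dists, bins=200, range=(0, dists.max()))

# normalize each radial shell against the ideal-gas expectation
r = 0.5 * (edges[1:] + edges[:-1])
dr = edges[1] - edges[0]
volume = np.ptp(xyz, axis=0).prod()  # crude estimate of the box volume
npairs = len(xyz) * (len(xyz) - 1) / 2
ideal = 4 * np.pi * r ** 2 * dr * npairs / volume
g_r = hist / ideal
```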
It is important to note that the reason there is a decrease in all of these diagrams at large \(r\) is that the radial distribution was not taken between particles across the periodic boundaries; thus, the radial distribution function is slightly biased toward smaller values of \(r\).
With \(\Gamma = 0.1\), we get
With \(\Gamma = 10\), we get
With \(\Gamma = 50\), we get
With \(\Gamma = 150\), we get
With \(\Gamma = 175\), we get
With \(\Gamma = 200\), we get
With \(\Gamma = 2000\), we get
As is evident in the plots above, when \(\Gamma\) reaches values near \(175\), the radial distribution function becomes peaked in regular intervals, indicating that a regular structure - a crystal lattice - has formed.