Thursday, July 24, 2014

Pieter de Buck - Week 4/5 - Duke University

Hi everyone, my name is Pieter and these are weeks 4 and 5 at Duke.

Again, I have learned and achieved a lot this week, making great progress on both the programming and physics sides of my research. These past weeks I had three sessions with Dr. Bass and a Duke undergrad, JP, where Dr. Bass teaches us things that are relevant to our projects. He talks mostly about the computer model we use to simulate particle collisions (UrQMD), which he helped create. He tells us about the theories in this field of physics, how they were discovered, and the equations behind them, and then explains how one would incorporate those theories into a computer model, which in most cases is not as easy as just plugging in the equations, because the equations are enormous and far too expensive to compute directly. A nice example is the Boltzmann equation, which describes how a gas of particles evolves over time.
[Image: the Boltzmann equation]
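For reference, the standard transport form of the Boltzmann equation (the image showed the version Dr. Bass wrote down, which may differ in notation) for a single-particle distribution $f(\vec{x}, \vec{p}, t)$ is:

\[
\frac{\partial f}{\partial t} + \frac{\vec{p}}{m} \cdot \nabla_{\vec{x}} f + \vec{F} \cdot \nabla_{\vec{p}} f = \left( \frac{\partial f}{\partial t} \right)_{\text{coll}}
\]

The left side is just particles streaming along and being pushed by forces; all the hard work (and the reason the equation blows up for many particles) hides in the collision term on the right.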

This equation only handles two separate particles; to simulate the ~5000 particles that come out of a lead+lead collision, it would grow unbelievably fast. Dr. Bass showed us the version for a system of just three particles, and it spanned about four full pages. That is obviously not something you can use in a computer model, since even the two-particle equation takes a lot of computing power. So when Dr. Bass and others made the UrQMD model, they had to employ some tricks to make the program run at an acceptable rate. These tricks get pretty technical, but they include only working with two particles at a time, and only doing calculations at the moment a collision between two particles actually happens (see the sketch below).
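To give an idea of what that trick looks like in practice, here is a minimal toy sketch in Python. This is my own illustration, not UrQMD's actual code: the 2D setup, the hard-sphere collision rule, and the velocity-swap "scattering" are all simplifying assumptions. Between collisions every particle just moves in a straight line, and the program only does real work when the next pair collides:

```python
import itertools
import math
import random


class Particle:
    def __init__(self, x, y, vx, vy):
        self.x, self.y = x, y
        self.vx, self.vy = vx, vy

    def drift(self, dt):
        # Free streaming: between collisions, particles move in straight lines.
        self.x += self.vx * dt
        self.y += self.vy * dt


def collision_time(a, b, radius=0.1):
    # Earliest future time at which a and b come within `radius` of each
    # other (a toy hard-sphere criterion), or None if they never do.
    dx, dy = b.x - a.x, b.y - a.y
    dvx, dvy = b.vx - a.vx, b.vy - a.vy
    A = dvx ** 2 + dvy ** 2
    B = 2 * (dx * dvx + dy * dvy)
    C = dx ** 2 + dy ** 2 - radius ** 2
    disc = B * B - 4 * A * C
    if A == 0 or disc < 0:
        return None
    t = (-B - math.sqrt(disc)) / (2 * A)
    return t if t > 0 else None


def scatter(a, b):
    # Toy 2 -> 2 "collision": swap velocities. This stands in for evaluating
    # a real scattering cross-section, which is where the physics would go.
    a.vx, a.vy, b.vx, b.vy = b.vx, b.vy, a.vx, a.vy


def run_cascade(particles, t_end):
    t = 0.0
    while t < t_end:
        # Only ever consider PAIRS: find the next collision among all pairs.
        upcoming = [(collision_time(a, b), a, b)
                    for a, b in itertools.combinations(particles, 2)]
        upcoming = [(tc, a, b) for tc, a, b in upcoming if tc is not None]
        if not upcoming:
            break
        tc, a, b = min(upcoming, key=lambda entry: entry[0])
        if t + tc > t_end:
            break
        for p in particles:
            p.drift(tc)  # everyone free-streams up to the collision moment...
        scatter(a, b)    # ...and only then is any real calculation done
        t += tc


# Example: twenty particles flying around for 10 time units.
random.seed(0)
gas = [Particle(random.random(), random.random(),
                random.uniform(-1, 1), random.uniform(-1, 1))
       for _ in range(20)]
run_cascade(gas, t_end=10.0)
```

The key design point is that the cost per step depends on finding the next colliding pair, not on solving a coupled equation for all ~5000 particles at once.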

We also talked about the idea behind the model, because there are a few things about it that seem strange. The most glaring one is that Dr. Bass's research group managed to create a computer model of a phenomenon that has never been directly observed, and probably never will be. So how can they model this Quark-Gluon Plasma (QGP) against experimental data?

First, a comment on experimental data from particle colliders such as RHIC on Long Island and, more recently, the LHC at CERN (on the French-Swiss border). The problem with looking for QGP with these colliders is that they can only detect the end state of the collision, after all the particles are done scattering. The QGP is supposed to exist for an incredibly short time, in a very small region, making it impossible to observe directly. This is the reason for creating computer models that simulate the QGP: a simulation can use any timescale and "zoom in" as far as you like. So to model the QGP accurately from experimental data, we have to look for clues about it in the end state of collisions, and Dr. Bass gave us an in-depth talk about some of these "observables".

But this still leaves the problem that we cannot write down an equation for the creation of the QGP itself, so we resort to modeling the clouds of particles that result from a collision with better-known, verifiable phenomena: gases, liquids, plasmas. We can take the equations for these states of matter, apply them to the particles in the simulation, and see if they give the same end result that we see in experiments. It turns out the QGP can be modeled with a hybrid of gas and liquid descriptions, using each at different times and in different parts of the model; the gas part is handled by the aforementioned Boltzmann equation. With a set of equations borrowed directly from other parts of physics, we end up with a model that, after some tweaking, corresponds to the findings at RHIC and the LHC. That means we can turn back time and watch what happens during the creation of the QGP, which is the ultimate goal of the model.
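For the curious, the "liquid" part of such a hybrid model is relativistic fluid dynamics. I'm stating the textbook ideal-fluid form here, not the exact equations in Dr. Bass's code; it amounts to conservation of energy and momentum:

\[
\partial_\mu T^{\mu\nu} = 0, \qquad T^{\mu\nu} = (\varepsilon + p)\, u^\mu u^\nu - p\, g^{\mu\nu}
\]

where $\varepsilon$ is the energy density, $p$ the pressure, and $u^\mu$ the flow velocity. An equation of state relating $p$ to $\varepsilon$ is where the QGP physics enters.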

On my side of the project, handling the model's output with computer scripts, I have also made progress. When plotting histograms I now "normalize" the graphs such that the integral (for a histogram this is just the total area of the bars) always equals the same value, regardless of the number of particles in a simulation, so everything ends up on the same scale. I have also assigned weights to the data depending on their values, so that regardless of the size or range of the simulation you should expect the same graph. Here is an example of a normalized graph with a log scale on the y-axis (and a code sketch below it).

The transverse momentum of the particles (i.e. momentum perpendicular to the beam) in GeV. Note that because of the log scale on the y-axis, the roughly straight falloff means the actual distribution is exponential.
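Here is a minimal sketch of what that normalization looks like; my actual scripts aren't shown here, so the data and variable names below are made up for illustration:

```python
import numpy as np
import matplotlib.pyplot as plt

# Stand-in data: transverse momenta in GeV (the real values come from UrQMD output).
pt = np.random.exponential(scale=0.5, size=10_000)

counts, edges = np.histogram(pt, bins=50)
widths = np.diff(edges)

# Normalize: divide each bar by (total entries * bin width) so the
# total AREA of the bars is 1, no matter how many particles went in.
density = counts / (counts.sum() * widths)

plt.bar(edges[:-1], density, width=widths, align="edge")
plt.yscale("log")  # log scale on the y-axis, as in the plot above
plt.xlabel("transverse momentum (GeV)")
plt.ylabel("normalized count")
plt.show()
```

(numpy can also do this division for you via `np.histogram(..., density=True)`.)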



Picture of my part of the office, double screens!


I hope I was able to explain my thoughts clearly; it is really hard to condense all this theory into one blog post.

Pieter




