Algorithm helps speed up simulation of vast, complex universes

UC Riverside astrophysicist Simeon Bird explains
By Iqbal Pittalwala | Inside UCR

Simeon Bird, an assistant professor of physics and astronomy at UC Riverside, is a member of a team of astrophysicists that has used machine learning to simulate the universe with high resolution in a thousandth of the time conventional methods would take. 


The researchers uploaded models of a small region of space at both low and high resolutions into a machine learning algorithm that is trained to upscale the low-resolution models to match the detail of the high-resolution versions. Such training allows the code, which uses “neural networks,” to generate super-resolution simulations containing up to 512 times as many particles as the low-resolution models. 
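As a point of arithmetic, a factor of 512 in particle count corresponds to an eightfold increase in sampling along each of the three spatial dimensions, since 8³ = 512:

```python
# 512x more particles = 8x finer sampling per axis in a 3-D volume.
linear_factor = 8
particle_factor = linear_factor ** 3
print(particle_factor)  # 512
```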

The work, published in the Proceedings of the National Academy of Sciences, was led by Yin Li at the Simons Center in New York and Yueying Ni at Carnegie Mellon University. The research paper is titled “AI-assisted superresolution cosmological simulations.”

Bird, who joined UCR in 2018, studies machine learning, black holes, neutrinos, and dark matter. He said it was a privilege to collaborate on the project. He maintained the simulation code used to generate the training data. In this Q&A, he answers a few questions about the project:

Q. What is a neural network in artificial intelligence and how does it work?

A neural network is a very flexible model to fit any kind of data. You can think of it as a series of filters that show different interesting features of the input. Neural networks are trained to pick out specific interesting parts of the simulation and reproduce them. For our work this is done by training one network which tries to reproduce the simulation and one network which tries to find differences between the reproduction and the original. By playing these two networks against each other we end up with something very hard to distinguish from the original simulation.
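The two-network setup Bird describes is the adversarial training idea. The toy sketch below illustrates it in one dimension, with a "generator" (an affine map of noise) learning to mimic a target distribution and a "discriminator" (a logistic classifier) learning to tell real samples from generated ones. All the model choices, learning rates, and distributions here are illustrative assumptions; the paper's actual super-resolution networks are far larger and operate on 3-D particle data.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    # Numerically stable logistic function.
    return 0.5 * (1.0 + np.tanh(0.5 * x))

# Target "real" data: samples from a normal distribution with mean 3.
def real_samples(n):
    return rng.normal(3.0, 1.0, n)

# Generator G(z) = a*z + b, starting far from the target.
a, b = 1.0, 0.0
# Discriminator D(x) = sigmoid(w*x + c).
w, c = 0.1, 0.0

lr, n = 0.05, 64
for step in range(2000):
    z = rng.normal(0.0, 1.0, n)
    fake = a * z + b
    real = real_samples(n)

    # Discriminator step: gradient ascent on log D(real) + log(1 - D(fake)),
    # i.e. reward it for spotting differences between real and generated data.
    dr, df = sigmoid(w * real + c), sigmoid(w * fake + c)
    w += lr * (np.mean((1 - dr) * real) - np.mean(df * fake))
    c += lr * (np.mean(1 - dr) - np.mean(df))

    # Generator step: gradient ascent on log D(fake), i.e. reward it for
    # producing samples the discriminator mistakes for real ones.
    df = sigmoid(w * fake + c)
    grad_fake = (1 - df) * w          # d log D(fake) / d fake
    a += lr * np.mean(grad_fake * z)  # chain rule through G(z) = a*z + b
    b += lr * np.mean(grad_fake)

# After the two networks play against each other, the generator's output
# mean (b) should sit near the real data's mean of 3.
print(f"generated mean ~ {b:.2f}")
```

Playing the two models against each other pulls the generator's output distribution toward the real one, which is the same mechanism, at vastly larger scale, that makes the upscaled simulations hard to distinguish from directly computed high-resolution ones.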

Q. How did you get involved in this project?

I run a lot of very large computer simulations. This takes a lot of time. Nowadays it is very hard to make the simulation substantially faster. This type of machine learning offers the possibility of scaling simulations to ten times their current size, which would be very hard in any other way.

Q. What do you think this technology will make possible for astronomy research? 

This technology will ultimately enable much larger simulations. These simulations are necessary in the near future to make sure we have models to compare to upcoming much larger astronomical surveys.

Q. How does the algorithm learn how to upscale the low-resolution models to match the detail found in the high-resolution versions?  

It is quite difficult to explain this simply. We think it works because on large scales one part of the cosmic web looks very much like another: once dark matter starts to collapse, it forgets where it came from.

Q. The code can take full-scale low-resolution models and generate super-resolution simulations containing up to 512 times as many particles. How can you be sure the upscaling isn’t generating unobservable “nonsense”?

For this model we were able to run a simulation directly with 512 times as many particles and check that it looks similar to the output of the upscaling. When we start using this on larger problems, we won't be able to run such a direct check. That, after all, is the point: if we could simply run the larger simulation, there would be no need for machine learning. Instead, we will run smaller simulations of parts of the upscaled simulation and check them against it.

Q. The team couldn’t get the simulation generator to work for two years. What were the obstacles?  

Machine learning is a rapidly progressing field. In the last two years there have been advances in how to train these models. We applied these advances and the previously impossible training problem suddenly became possible.

Q. Where else can this new technology be potentially used?

This new technology can hopefully dramatically increase the dynamic range of our simulations, allowing us to model individual galaxies at the same time as the large-scale distributions of galaxies on the sky.

Q. How can someone use this technology?  

This technology is freely available and built on open-source software. Anyone with a decently powerful computer and a graphics card could do something similar with enough patience.

Thumbnail photo: Milky Way Galaxy photographed by Spitzer Telescope. (NASA/JPL-Caltech/S. Stolovy; Spitzer Science Center/Caltech)
