
Picture this: star clusters, nebulas, and other interstellar phenomena created out of whole cloth unsupervised, by a computer. It might sound like the description for a futuristic holodeck, but researchers at the University of Edinburgh’s Institute for Perception and Institute for Astronomy have designed such a system with the help of artificial intelligence (AI).

In a paper published on the preprint server arXiv.org (“Forging new worlds: high-resolution synthetic galaxies with chained generative adversarial networks”), they describe an AI model capable of generating high-resolution images of synthetic galaxies that closely follow the distributions of real galaxies.

“Astronomy of the 21st century finds itself with extreme quantities of data, with most of it filtered out during capture to save on memory storage,” they wrote. “This growth is ripe for modern technologies such as deep learning. Since galaxies are a prime contender for such applications, we explore the use of [AI] to produce … galaxy images.”

Core to the team’s machine learning architecture are generative adversarial networks (GANs): two-part neural networks consisting of a generator that produces samples and a discriminator that attempts to distinguish the generated samples from real-world samples. It’s not a stretch to characterize GANs as wunderkinder of AI algorithms; they’ve been used to discover new drugs, create convincing photos of burgers and butterflies, and even produce artificial scans of brain cancer.
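The generator-versus-discriminator dynamic can be sketched in a few lines. The following is a minimal, numpy-only illustration of the two opposing objectives on toy 1-D data — not the authors’ architecture, just the standard GAN loss structure:

```python
import numpy as np

rng = np.random.default_rng(0)

def generator(z, w):
    """Map latent noise z to a 'sample' via a single linear layer."""
    return z @ w

def discriminator(x, v):
    """Score a sample; a sigmoid squashes the score to a real/fake probability."""
    return 1.0 / (1.0 + np.exp(-(x @ v)))

# Toy "real" data clustered near 2.0; latent noise is standard normal.
real = rng.normal(2.0, 0.1, size=(64, 1))
z = rng.normal(size=(64, 1))

w = rng.normal(size=(1, 1))  # generator weights
v = rng.normal(size=(1, 1))  # discriminator weights

fake = generator(z, w)
d_real = discriminator(real, v)
d_fake = discriminator(fake, v)

# Discriminator objective: maximize log D(real) + log(1 - D(fake)),
# i.e. label real samples 1 and generated samples 0.
d_loss = -np.mean(np.log(d_real + 1e-8) + np.log(1.0 - d_fake + 1e-8))

# Generator objective: fool the discriminator, i.e. maximize log D(fake).
g_loss = -np.mean(np.log(d_fake + 1e-8))
```

In training, the two losses are minimized in alternation, which is what drives the generator toward samples the discriminator cannot tell apart from real ones.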



Above: Additional samples produced by the AI system.

The proposed galaxy-generating system was made up of two five-layer GANs: Stage-I GAN and Stage-II GAN. The first generated low-resolution images (64 x 64 pixels), while the second converted them into higher-resolution images (128 x 128 pixels) using a technique called super-resolution. In practice, the researchers noted, the Stage-II GAN hallucinated missing pixels, targeting realism rather than accuracy.
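The chained two-stage flow — a Stage-I GAN producing 64 x 64 images and a Stage-II GAN super-resolving them to 128 x 128 — can be sketched as below. Both stage functions here are stand-ins (a random image and a nearest-neighbor upsample plus noise), since the point is the pipeline’s shapes, not the learned networks:

```python
import numpy as np

rng = np.random.default_rng(1)

def stage_one(z):
    """Stand-in for the Stage-I GAN: latent vector -> low-resolution 64x64 image."""
    return rng.random((64, 64))  # placeholder for a learned generator

def stage_two(img):
    """Stand-in for the Stage-II super-resolution GAN: 64x64 -> 128x128.
    Upsamples, then adds detail for the missing pixels (noise here;
    learned, realism-targeting structure in the real system)."""
    up = img.repeat(2, axis=0).repeat(2, axis=1)    # nearest-neighbor 2x upsample
    detail = 0.05 * rng.standard_normal(up.shape)   # 'hallucinated' fine structure
    return np.clip(up + detail, 0.0, 1.0)

low = stage_one(rng.standard_normal(100))   # Stage-I output: 64 x 64
high = stage_two(low)                       # Stage-II output: 128 x 128
```

The chaining matters: Stage-II never sees the latent vector, only Stage-I’s image, so it must invent plausible fine detail — which is why the researchers describe it as targeting realism rather than accuracy.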

To “encourage” the generator in the Stage-II GAN to spit out images of synthetic galaxies similar to their upscaled, real-image counterparts, the paper’s authors introduced a “dual-objective function” that computed an error metric between resolution-enhanced images and real galaxies. The result was a greater number of generated samples retaining “rarer” characteristics of the galaxies, such as spiral arms.
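A dual objective of this kind is commonly written as a weighted sum of an adversarial term and a pixel-wise reconstruction term. The sketch below assumes that form with a mean-squared pixel error and a hypothetical weight `lam`; the paper’s exact error metric and weighting may differ:

```python
import numpy as np

def dual_objective(d_fake_scores, sr_img, real_img, lam=0.9):
    """Hypothetical dual objective for the Stage-II generator.

    Combines an adversarial term (fool the discriminator) with a
    pixel-wise error between the super-resolved image and the real
    galaxy image; 'lam' balances the two terms (an assumption here).
    """
    adv = -np.mean(np.log(d_fake_scores + 1e-8))   # adversarial: push D(fake) -> 1
    pix = np.mean((sr_img - real_img) ** 2)        # reconstruction: match real pixels
    return lam * adv + (1.0 - lam) * pix
```

The reconstruction term is what anchors the generator to real-image statistics, making it likelier to reproduce rarer structure such as spiral arms instead of averaging it away.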

The researchers trained the AI system on a PC with a single Nvidia GTX 1060 GPU, feeding it full-color galaxy images from the Galaxy Zoo 2 dataset, a crowd-sourced astronomy project. They evaluated the results on four properties: ellipticity, or the degree of deviation from circularity; angle of elevation from the horizontal; total flux; and the size of the semi-major axis (one half of the ellipse’s longest diameter).
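All four properties can be estimated from second-order image moments — a standard approach in galaxy photometry, though the paper’s exact estimators may differ. A minimal sketch:

```python
import numpy as np

def galaxy_properties(img):
    """Estimate (flux, ellipticity, angle, semi-major axis) from a 2-D image
    using flux-weighted second-order moments. A standard moments-based
    estimator, not necessarily the paper's exact measurement pipeline."""
    flux = img.sum()                                  # total flux
    ys, xs = np.indices(img.shape)
    cx = (img * xs).sum() / flux                      # flux-weighted centroid
    cy = (img * ys).sum() / flux
    mxx = (img * (xs - cx) ** 2).sum() / flux         # second-order moments
    myy = (img * (ys - cy) ** 2).sum() / flux
    mxy = (img * (xs - cx) * (ys - cy)).sum() / flux
    cov = np.array([[mxx, mxy], [mxy, myy]])
    evals, evecs = np.linalg.eigh(cov)                # eigenvalues, ascending
    a, b = np.sqrt(evals[1]), np.sqrt(evals[0])       # semi-major / semi-minor axes
    ellipticity = 1.0 - b / a                         # 0 for a perfect circle
    angle = np.degrees(np.arctan2(evecs[1, 1], evecs[0, 1]))  # major-axis angle
    return flux, ellipticity, angle, a
```

For a circularly symmetric source the semi-major and semi-minor axes coincide, so the ellipticity estimate goes to zero — a quick sanity check for the estimator.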

In the end, the model produced “physically realistic” images of galaxies closely resembling the real things, the researchers wrote. They posit that their system might be used to augment databases of real samples, in effect serving as a data source for deep learning models — such as those designed to classify and segment galaxy images — that require a large number of training samples.

“Generative models that are able to create physically realistic galaxy images have many practical uses,” they wrote. “[Our] work demonstrates the potential of GAN architectures as a valuable tool for modern-day astronomy.”

VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact.