Artificial Intelligence and Physics Advances in the Field of Gravitational Waves (part 2) and the Alternative of Open Source Science

The Laser Interferometer Gravitational-Wave Observatory (LIGO) consists of ten subsystems, one of which is the Data and Computing Systems (DCS). The data obtained by LIGO not only include the findings from the laser interferometer's gravitational wave detector, but also the output of various independent detectors and recorders that monitor the detector's environment and the state of its equipment: temperature, atmospheric pressure, wind, heavy rain, hail, surface vibrations, sound, and electric and magnetic fields, as well as data on the state of the detector itself, such as the position of the plane mirror and lens inside the gravitational wave detector.

In terms of data acquisition, the Data AcQuisition (DAQ) systems of the H1 and H2 interferometers at LIGO Hanford (Washington State), for example, recorded a total of 12,733 channels, of which 1,279 were fast channels.

The upgraded LIGO is designed to record more than 300,000 data channels, including about three thousand fast channels. This is a typical big data analysis and processing problem, which requires powerful computing resources and advanced algorithms to process such a huge amount of data effectively.
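
As an illustration of how such data are accessed in practice, here is a minimal sketch using the open-source gwpy library to fetch a segment of publicly released LIGO strain data; the GPS interval below brackets the GW150914 event and is chosen only for the example.

```python
# Minimal sketch: fetching publicly released LIGO strain data with gwpy.
# Assumes gwpy is installed (pip install gwpy); the GPS interval below
# brackets the GW150914 event and is an illustrative choice.
from gwpy.timeseries import TimeSeries

# Download 32 seconds of strain data from the H1 (Hanford) detector.
strain = TimeSeries.fetch_open_data("H1", 1126259446, 1126259478)

print(strain.sample_rate)              # e.g. 4096 Hz in the public release
print(strain.times[0], strain.times[-1])
```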

Matched filtering technology is used in the search for gravitational wave signals. This is a technology based on waveform analysis: it requires the construction of a reasonable physical model of the gravitational wave source and the generation of thousands of template waveforms from that model, which are then matched against the gravitational wave data to find relevant events.
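
A minimal sketch of the underlying idea follows, assuming a toy chirp template and white noise for simplicity (real searches first whiten the data with the detector's noise spectrum): the matched-filter output is the correlation of the data against the template, computed efficiently in the frequency domain.

```python
# Minimal matched-filter sketch with NumPy: correlate noisy data against a
# known template in the frequency domain. White noise and a toy chirp are
# simplifying assumptions for illustration.
import numpy as np

rng = np.random.default_rng(0)
fs, n = 4096, 4096 * 4                      # sample rate (Hz), 4 s of data
t = np.arange(n) / fs

# Toy "chirp" template: sinusoid with rising frequency, Gaussian envelope.
template = np.sin(2 * np.pi * (50 + 20 * t) * t) * np.exp(-((t - 2) ** 2))

# Data = unit-variance noise + weak, time-shifted copy of the template.
data = rng.normal(0, 1, n)
data += 0.5 * np.roll(template, 1024)

# Frequency-domain correlation gives the matched-filter output for every
# possible time shift in one pass (O(n log n) instead of O(n^2)).
snr = np.fft.irfft(np.fft.rfft(data) * np.conj(np.fft.rfft(template)), n)
snr /= np.sqrt(np.sum(template ** 2))       # SNR in units of noise sigma

peak = np.argmax(np.abs(snr))
print(f"peak SNR {np.abs(snr[peak]):.1f} at shift {peak} samples")
```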

Moreover, unlike ordinary telescope images, gravitational lenses distort background objects into blurred rings and arcs, which are quite difficult to interpret. Applying AI neural networks to the analysis of gravitational lens images can be up to ten million times faster than traditional methods. Neural networks can discover new lenses and determine the properties, mass distribution and magnification levels of background galaxies. They will help us identify interesting objects and analyse them quickly, leaving more time to explore and ask questions about the universe.
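
As a hedged illustration of the kind of network involved, the sketch below defines a small convolutional classifier for lens-candidate images in PyTorch; the architecture, the 64x64 input size and the two output classes are assumptions for the example, not the published model.

```python
# Minimal sketch of a convolutional network for classifying lens-candidate
# images, in PyTorch. Architecture, 64x64 single-channel input and two
# classes (lens / not lens) are illustrative assumptions.
import torch
import torch.nn as nn

class LensClassifier(nn.Module):
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                 # 64x64 -> 32x32
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                 # 32x32 -> 16x16
        )
        self.head = nn.Linear(32 * 16 * 16, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x).flatten(1))

model = LensClassifier()
batch = torch.randn(8, 1, 64, 64)            # 8 fake single-channel images
print(model(batch).shape)                    # torch.Size([8, 2])
```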

As things stand, several aspects deserve attention when applying Artificial Intelligence to the analysis and processing of gravitational wave big data.

In “supervised learning”, the matched filter method must know the waveform of the signal: the gravitational wave strain data are matched against the waveforms in a huge template library. This is obviously a process with an enormous computational workload, and how to improve efficiency while reducing the computational load and resource consumption is undoubtedly worth in-depth study.
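
One common way to cut that cost, sketched below under simplifying assumptions (toy sinusoidal templates, white noise), is to precompute the FFTs of the whole template bank once and reuse the data's FFT across all templates, so each template costs a single batched inverse FFT rather than a full time-domain correlation.

```python
# Sketch: matching one data segment against a whole template bank at once.
# Precomputing the templates' FFTs and reusing the data's FFT reduces the
# per-template cost to one inverse FFT. Toy sinusoidal templates and white
# noise are simplifying assumptions.
import numpy as np

rng = np.random.default_rng(1)
n, n_templates = 4096, 64

# Toy bank: sinusoids at different frequencies (stand-ins for waveforms).
t = np.arange(n) / n
bank = np.sin(2 * np.pi * np.outer(np.arange(1, n_templates + 1) * 10, t))
bank /= np.linalg.norm(bank, axis=1, keepdims=True)   # unit-norm templates

data = rng.normal(0, 1, n) + 10 * bank[17]            # inject template 17

# One FFT of the data, one FFT per template (precomputable offline),
# then a batched inverse FFT gives every template's correlation series.
snr = np.fft.irfft(np.fft.rfft(data) * np.conj(np.fft.rfft(bank, axis=1)),
                   n, axis=1)

best = np.unravel_index(np.argmax(np.abs(snr)), snr.shape)
print(f"best template {best[0]} at lag {best[1]}")    # expect template 17
```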

In the “unsupervised learning” of gravitational wave detection, by contrast, the waveforms of a large number of events are unknown. For supernovae and rotating neutron stars, the current accumulation of astronomical observations cannot provide a theoretical estimate of the intensity of the gravitational waves they release, which calls for “unsupervised learning”, i.e. algorithms that discover unknown patterns in gravitational wave data.
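
A minimal sketch of what unsupervised pattern discovery can look like here, assuming simple band-power features and scikit-learn's IsolationForest as the anomaly detector (an illustrative choice, not the method used by the collaborations):

```python
# Sketch: unsupervised anomaly detection on short data segments, using
# band-power features and an IsolationForest. No waveform model is assumed;
# features and detector are illustrative choices.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(2)
seg_len, n_segs = 1024, 200

# 200 noise segments, plus a loud oscillation hidden in segment 42.
segments = rng.normal(0, 1, (n_segs, seg_len))
segments[42] += 4 * np.sin(2 * np.pi * 60 * np.arange(seg_len) / seg_len)

# Feature per segment: power in a few coarse frequency bands.
spectra = np.abs(np.fft.rfft(segments, axis=1)) ** 2
features = np.stack([band.sum(axis=1)
                     for band in np.array_split(spectra, 8, axis=1)], axis=1)

scores = IsolationForest(random_state=0).fit(features).score_samples(features)
print("most anomalous segment:", np.argmin(scores))   # expect 42
```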

The “ensemble learning” strategy concerns other types of gravitational waves, such as continuous and primordial ones, which differ from those detected in the binary black hole merger mentioned in the previous article. They have not been detected yet: continuous gravitational waves from rotating neutron stars, for instance, require not only higher detector sensitivity but also extremely demanding data analysis capabilities.
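
As a hedged illustration of an ensemble strategy on this kind of problem, the sketch below combines several weak classifiers with scikit-learn's VotingClassifier on synthetic "signal vs noise" features; the data and the chosen base models are assumptions for the example.

```python
# Sketch: an ensemble of classifiers voting on whether a feature vector
# contains a signal. Synthetic data and the base models are illustrative
# assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Synthetic "signal vs noise" features standing in for real detector data.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

ensemble = VotingClassifier(
    estimators=[("lr", LogisticRegression(max_iter=1000)),
                ("rf", RandomForestClassifier(random_state=0)),
                ("knn", KNeighborsClassifier())],
    voting="soft",                 # average predicted class probabilities
)
ensemble.fit(X_tr, y_tr)
print(f"ensemble accuracy: {ensemble.score(X_te, y_te):.3f}")
```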

Astronomical research is far removed from ordinary people's lives and, in the United States as in the rest of the world, people often complain that its costs are too high. Although research into gravitational wave data analysis has no direct commercial value, the algorithms and technology being developed for the analysis and processing of big data can migrate to other fields in the future: applied to commercial or purely academic domains, they can produce value through research.

Most gravitational wave data analysis uses one-dimensional signal processing technology, which can be transferred to spectral data analysis, sensor data analysis and general data analysis.
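
A small sketch of that transferability, assuming a generic sensor time series: the same bandpass-and-normalise pipeline used on strain data applies unchanged. The signal, sample rate and band edges are illustrative assumptions.

```python
# Sketch: a 1-D pipeline of the kind used on strain data (bandpass filter,
# then normalise) applied to a generic sensor time series. Signal content
# and band edges are illustrative assumptions.
import numpy as np
from scipy.signal import butter, sosfiltfilt

fs = 1000.0                                   # sensor sample rate (Hz)
t = np.arange(0, 5, 1 / fs)
sensor = (np.sin(2 * np.pi * 35 * t)          # 35 Hz component of interest
          + 0.5 * np.sin(2 * np.pi * 3 * t)   # slow drift to suppress
          + np.random.default_rng(3).normal(0, 0.3, t.size))

# Zero-phase Butterworth bandpass around the band of interest (20-50 Hz).
sos = butter(4, [20, 50], btype="bandpass", fs=fs, output="sos")
filtered = sosfiltfilt(sos, sensor)
filtered /= filtered.std()                    # unit-variance normalisation

# 5 s of data -> 0.2 Hz frequency resolution; expect the 35 Hz component.
dominant = np.abs(np.fft.rfft(filtered)).argmax() / 5
print(f"dominant frequency after filtering: {dominant:.1f} Hz")
```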

Indeed, as is well known, AI technology has long been used in both space and non-space exploration, including computer vision, speech recognition, natural language processing, machine learning and so on. Detectors, in turn, help us obtain images, information and data from the universe and transmit them back. As human beings improve their ability to understand the universe, Artificial Intelligence will play an increasingly important role, and space exploration science and the refinement of AI technology will eventually benefit human society as a whole.

Thinking of Newton, we would say: “If you look at the stars, you will see an apple. If you see an apple, you will study its origin. If you study its origin, you will have a law. If you have a law, you will want a conclusion”. It often happens, however, that a conclusion can end up in a contradiction, and if we get into a contradiction we will have to look at the stars again.

Space exploration, AI technology and human society are probably in this dialectical relationship of renewal – just think of the theory of relativity and quantum physics – that continues to stimulate the imagination and the power of development of the human mind and civilisations.

Nevertheless, we must be careful not to stray into the grotesque. I believe we must deal with the topic without concluding that one day Artificial Intelligence will confine humans to a zoo for protected species: hopefully, all this will always be created and managed by humans, who remain the sole solvers of the problems that override standards and the real constituents of the world in which we live.

With the right tools, AI methods can be integrated into scientists' workflows. As mentioned above, the aim is not to replace human intelligence, but to significantly augment it and reduce the workload. This research verified part of Einstein's theory of relativity and the connection between time and space, and it is now also the starting point for research in gravitational wave astronomy. Even the uninitiated will be able to begin to understand the universe, including dark energy, gravity and neutron stars, in greater depth and at a faster pace.

The contribution of this research is that, by combining the power of Artificial Intelligence and supercomputers, experiments involving huge amounts of data can be analysed almost instantaneously, and the whole process can be reproduced, demonstrating that Artificial Intelligence can also be used to face and solve other important challenges.

Among other things, one solution to step up progress is the open source (open code) model, whereby code can be used by other research groups without any preclusion. In a nutshell, open source means source code made freely available for possible modification and redistribution: products include permission to use the source code, design documents or content, all freely available to the public.

Open source is a decentralised software development model that encourages open collaboration. One of the fundamental principles of open source software development is peer production. The term originated with software, but the model has expanded beyond software to cover other content and forms of open collaboration. The open source movement in software began as a response to the limitations of proprietary code.

by Giancarlo Elia Valori
