
NASA turns telescope data into music

A fun new project in Montreal has NASA converting data from telescopes into playable music, with the help of a Canadian composer.

NASA’s innovative project, which began in 2020 at the Chandra X-ray Center, has successfully transformed telescope data into music that can be played by humans. This initiative, known as “sonification,” converts digital telescope data into notes and sounds, offering a new way to experience astronomical data traditionally presented as images. The project involves the creation of a musical soundscape using data from NASA’s Chandra, Hubble, and Spitzer space telescopes.
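The core idea of sonification can be illustrated with a short sketch. The code below is purely illustrative and not NASA's actual pipeline: it maps a hypothetical row of image brightness values (0–255) onto MIDI note numbers, so brighter pixels become higher pitches. The note range and the sample data are assumptions for the example.

```python
def sonify(brightness_row, low_note=48, high_note=84):
    """Map brightness values (0-255) linearly onto MIDI note numbers.

    low_note/high_note bound the pitch range (C3-C6 here); both are
    illustrative choices, not parameters from the actual project.
    """
    span = high_note - low_note
    return [low_note + round(b / 255 * span) for b in brightness_row]

# A hypothetical row of pixel brightness values from a telescope image:
row = [0, 64, 128, 192, 255]
print(sonify(row))  # darkest pixel maps to the lowest note, brightest to the highest
```

Real sonifications use richer mappings (for example, assigning different instruments to data from different telescopes, or scanning across an image over time), but the principle is the same: a systematic translation from data values to musical parameters.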

Composer Sophie Kastner collaborated with NASA to develop these sonifications into playable music. Kastner’s approach focuses on small sections of the data, akin to creating vignettes, highlighting parts of the image that might be overlooked in the full sonification. This method has resulted in a new composition titled “Where Parallel Lines Converge,” which can be played by musicians, marking a unique intersection of science and art.

The pilot program centered on data from a region near the center of the Milky Way galaxy, home to a supermassive black hole. The composition not only represents a new take on real data from NASA telescopes but also continues the human tradition of drawing inspiration from the night sky to create art. The piece was recorded by Montreal-based Ensemble Éclat, conducted by Charles-Eric LaFontaine, at McGill University.

This project illustrates NASA’s ongoing commitment to innovative ways of interpreting and presenting space data, bridging the gap between science and art, and providing a new perspective on the cosmos.

NASA’s engagement with capturing and interpreting sounds from space has evolved significantly over recent years. The agency has been actively translating observational data from telescopes like the Chandra X-ray Observatory, Hubble Space Telescope, and the James Webb Space Telescope into audible frequencies. This process, known as sonification, allows for a unique auditory exploration of the universe, making high-energy phenomena like exploded stars and black holes accessible to listeners.

Notable milestones include the Perseverance Mars rover recording the sound of the Ingenuity helicopter’s flight on Mars in 2021, and InSight’s seismometer capturing both a marsquake in 2019 and the “dinks and donks” created by friction within the instrument. These initiatives reflect NASA’s commitment to using innovative techniques to enhance our understanding of space phenomena, making these insights accessible and engaging for a broader audience.


About The Author

ChatGPT is a large language model developed by OpenAI, based on the GPT-3.5 architecture. It was trained on a massive amount of text data, allowing it to generate human-like responses to a wide variety of prompts and questions. ChatGPT can understand and respond to natural language, making it a valuable tool for tasks such as language translation, content creation, and customer service. While ChatGPT is not a sentient being and does not possess consciousness, its sophisticated algorithms allow it to generate text that is often indistinguishable from that of a human.

