Applications of artificial intelligence in the space sector

I think it would be useful to have a thread collecting material on the use of specialized artificial intelligence in the space context, both in astronomy and in astronautics.

The idea is to focus on AI designed to carry out specific tasks (astrophysical data analysis, onboard systems control, autonomous mission planning, etc.), avoiding the more speculative or futuristic articles (e.g. "sentient" AI / AGI, Artificial General Intelligence, sent into deep space) as well as those about "general-purpose" AI (e.g. ChatGPT, Grok and the like).


To start, here is a first article from the JPL website explaining how NASA is evaluating the use of AI on Earth-observation satellites.

In a recent test, NASA showed how artificial intelligence-based technology could help orbiting spacecraft provide more targeted and valuable science data. The technology enabled an Earth-observing satellite for the first time to look ahead along its orbital path, rapidly process and analyze imagery with onboard AI, and determine where to point an instrument. The whole process took less than 90 seconds, without any human involvement. […]

The first of a series of flight tests occurred aboard a commercial satellite in mid-July. The goal: to show the potential of Dynamic Targeting to enable orbiters to improve ground imaging by avoiding clouds and also to autonomously hunt for specific, short-lived phenomena like wildfires, volcanic eruptions, and rare storms.

The related paper:

:scroll: Flight of Dynamic Targeting on CogniSAT-6 - Update

Dynamic targeting (DT) is a spacecraft autonomy concept in which lookahead sensor data is acquired and rapidly analyzed and used to drive subsequent observation. We describe the Low Earth Orbit application of this approach in which lookahead imagery is analyzed to detect clouds, thermal anomalies, or land use cases to drive higher quality near nadir imaging. Use cases for such a capability include: cloud avoidance, storm hunting, search for planetary boundary layer events, plume study, and beyond. The DT concept requires a lookahead sensor or agility to use a primary sensor in such a mode, edge computing to analyze images rapidly onboard, and a primary followup sensor. Additionally, an inter-satellite or low latency communications link can be leveraged for cross platform tasking. We describe implementation in progress to fly DT in late Spring 2025 on the CogniSAT-6 (Ubotica/Open Cosmos) spacecraft that launched in March 2024 on the SpaceX Transporter-10 launch.
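The DT loop described in the abstract (lookahead sensing, rapid onboard analysis, retargeting of the primary sensor) can be sketched roughly as follows. This is an illustrative toy, not the flight software: the thresholds and function names are invented, and the "cloud detector" is a crude brightness test standing in for the real onboard model.

```python
# Illustrative sketch of a Dynamic Targeting decision loop (not flight code).
# Each lookahead tile is scored onboard; the primary instrument is pointed
# only at tiles that pass the science-value checks (cloud-free, or showing a
# thermal anomaly such as a wildfire or eruption worth imaging anyway).

def cloud_fraction(tile):
    """Fraction of pixels brighter than a crude cloud threshold (stand-in
    for the onboard cloud-detection model)."""
    flat = [px for row in tile for px in row]
    return sum(px > 0.8 for px in flat) / len(flat)

def has_thermal_anomaly(tile, threshold=0.95):
    """True if any pixel exceeds a hot-spot threshold (wildfire/volcano proxy)."""
    return any(px > threshold for row in tile for px in row)

def select_targets(lookahead_tiles, max_cloud=0.3):
    """Return indices of tiles worth pointing the primary sensor at."""
    targets = []
    for i, tile in enumerate(lookahead_tiles):
        if cloud_fraction(tile) > max_cloud and not has_thermal_anomaly(tile):
            continue  # mostly clouds and nothing burning underneath: skip
        targets.append(i)
    return targets

# Toy lookahead strip: a clear scene, a cloudy scene, a cloudy scene with a hot spot.
clear  = [[0.2, 0.3], [0.1, 0.2]]
cloudy = [[0.9, 0.9], [0.9, 0.2]]
fire   = [[0.9, 0.99], [0.9, 0.9]]
print(select_targets([clear, cloudy, fire]))  # → [0, 2]
```

The point of the sketch is the ordering constraint from the article: sensing, analysis, and the pointing decision must all fit inside the ~90-second window before the spacecraft overflies the scene.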

Ping:
:satellite_antenna: [2024-03-04] Falcon 9 Block 5 | Transporter 10 (Dedicated SSO Rideshare)


And a second article, straight from the Google DeepMind website, announcing a "space" AI, AlphaEarth Foundations, intended to integrate and analyze enormous quantities of data collected by different satellites.

New AI model integrates petabytes of Earth observation data to generate a unified data representation that revolutionizes global mapping and monitoring […] Today, we’re introducing AlphaEarth Foundations, an artificial intelligence (AI) model that functions like a virtual satellite. It accurately and efficiently characterizes the planet’s entire terrestrial land and coastal waters by integrating huge amounts of Earth observation data into a unified digital representation, or “embedding,” that computer systems can easily process. This allows the model to provide scientists with a more complete and consistent picture of our planet’s evolution, helping them make more informed decisions on critical issues like food security, deforestation, urban expansion, and water resources.

The related paper:

:scroll: AlphaEarth Foundations: An embedding field model for accurate and efficient global mapping from sparse label data

Unprecedented volumes of Earth observation data are continually collected around the world, but high-quality labels remain scarce given the effort required to make physical measurements and observations. This has led to considerable investment in bespoke modeling efforts translating sparse labels into maps. Here we introduce AlphaEarth Foundations, an embedding field model yielding a highly general, geospatial representation that assimilates spatial, temporal, and measurement contexts across multiple sources, enabling accurate and efficient production of maps and monitoring systems from local to global scales. The embeddings generated by AlphaEarth Foundations are the only to consistently outperform all previous featurization approaches tested on a diverse set of mapping evaluations without re-training. We will release a dataset of global, annual, analysis-ready embedding field layers from 2017 through 2024.
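The practical appeal of such an embedding field is that sparse labels go a long way: instead of training a bespoke model per map, you fit a lightweight classifier on top of precomputed per-pixel embeddings. A toy version of that workflow with a nearest-centroid classifier; the vectors here are random stand-ins for real AlphaEarth embeddings, and the class names are invented.

```python
import random

# Toy stand-in for an embedding-field workflow: each "pixel" is a fixed-length
# vector, and a handful of labeled pixels suffices to fit a lightweight
# classifier (here, nearest-centroid) that can then label the whole map.

random.seed(0)
DIM = 8  # real embeddings are higher-dimensional; kept small for the sketch

def fake_embedding(center, noise=0.1):
    """A synthetic embedding: class center plus small random perturbation."""
    return [c + random.uniform(-noise, noise) for c in center]

# Two hypothetical land-cover classes with distinct embedding "centers".
FOREST = [1.0] * DIM
WATER  = [-1.0] * DIM

labeled = [(fake_embedding(FOREST), "forest") for _ in range(5)] + \
          [(fake_embedding(WATER), "water") for _ in range(5)]

def centroid(vectors):
    return [sum(xs) / len(xs) for xs in zip(*vectors)]

centroids = {cls: centroid([v for v, c in labeled if c == cls])
             for cls in ("forest", "water")}

def classify(embedding):
    """Assign the class whose centroid is nearest in embedding space."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda cls: dist2(embedding, centroids[cls]))

print(classify(fake_embedding(FOREST)))  # → forest
print(classify(fake_embedding(WATER)))   # → water
```

The design choice this illustrates is the one the abstract claims as the headline result: the heavy lifting (assimilating multi-sensor, multi-temporal data) happens once, upstream, so each downstream map needs only a cheap model and a few labels.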

1 Like

A post was moved to a new topic: Speculative applications of artificial intelligence in the space sector

I don't think this has been posted yet. It's an entire subsite, not a single article.

Autonomous rover driving is of course a fairly obvious application, and it has its own article there. Perhaps it just wasn't called AI before:

In my opinion it's also needed for the Moon, because the 2-3 seconds of lag in transmissions from Earth are not negligible.

2 Likes

A TechCrunch article describes a joint NASA-Google project that aims to develop a medical AI capable of assisting astronauts on Mars.

One early experiment is a proof-of-concept AI medical assistant the agency is building with Google. The tool, called Crew Medical Officer Digital Assistant (CMO-DA), is designed to help astronauts diagnose and treat symptoms when no doctor is available or communications to Earth are blacked out.

The multimodal tool, which includes speech, text, and images, runs inside Google Cloud’s Vertex AI environment. […]

The two organizations have put CMO-DA through three scenarios: an ankle injury, flank pain, and ear pain. A trio of physicians, one being an astronaut, graded the assistant’s performance across the initial evaluation, history-taking, clinical reasoning, and treatment.

The trio found a high degree of diagnostic accuracy, judging the flank pain evaluation and treatment plan to be 74% likely correct; ear pain, 80%; and 88% for the ankle injury.

3 Likes

An AI is operational aboard the Chinese space station and was used to support the extravehicular activity two weeks ago.

The article from China Daily.

Built on a homegrown open-source AI model, Wukong AI is designed to meet the requirements of manned space missions. It has developed a large language model tailored for professional fields and features a knowledge base centered on aerospace flight standards.

The Shenzhou XX crew members in orbit had already used this AI model to assist with preparations for extravehicular activities that took place on Friday. The mission commander, Senior Colonel Chen Dong, and astronaut Colonel Wang Jie asked Wukong AI for the work schedule a day before conducting their third spacewalk. The AI system quickly replied with relevant links and guidance. […]

Wukong AI’s participation marks the first time that China’s space station has applied and verified large-scale AI model technology. The model has been operating stably in orbit for one month, and the Shenzhou XX astronauts have given positive feedback, said Zou Pengfei, a center staff member.

Zou noted that Wukong AI combines both ground and space models in an intelligent question-answering system, with the ground model offering in-depth analysis and the orbiting one solving critical and complex challenges.
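The ground/space split Zou describes is essentially a latency-aware routing problem: time-critical questions must be answered by the onboard model, while analysis that tolerates the communication round trip can go to the larger ground model. A hypothetical sketch of such a router; the routing rule, names, and latency figures are all mine, since the actual Wukong AI design is not public.

```python
# Hypothetical sketch of a ground/space model split for an onboard assistant.
# All constants and the routing rule are assumptions for illustration only.

ONBOARD_LATENCY_S = 1     # small in-orbit model answers immediately (assumed)
GROUND_ROUNDTRIP_S = 30   # uplink + ground inference + downlink (assumed)

def route(query: str, deadline_s: float):
    """Send the query to the ground model only when the deadline
    leaves room for the communication round trip."""
    if deadline_s < GROUND_ROUNDTRIP_S:
        return ("onboard", ONBOARD_LATENCY_S)
    return ("ground", GROUND_ROUNDTRIP_S)

print(route("Next EVA checklist step?", deadline_s=5))       # → ('onboard', 1)
print(route("Analyze this telemetry trend", deadline_s=600)) # → ('ground', 30)
```

The trade-off mirrors the article's description: the orbiting model handles the critical, time-sensitive cases while the ground model provides the in-depth analysis.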

According to The Register, the AI may be running on a system based on "homegrown" processors that was sent to Tiangong last November.

The article that reported the news:

Chinese chip designer Loongson last Friday announced its processors are powering a cloud computing platform that has been launched into space.

The silicon slinger announced that its tech was built into a payload called Star Eye that launched on November 15 aboard the Tianzhou-8 cargo mission to the Tiangong Space Station.

Loongson didn’t reveal which of its processors made it into space. It uses a proprietary instruction set architecture that is compatible with MIPS but includes elements of RISC-V, and offers products designed for use on the desktop, in servers and in industrial machinery.

2 Likes

In 2021 a NASA team created a piece of software, ExoMiner, to validate the data of 370 planets discovered by Kepler; a new version (ExoMiner++) has now been developed, trained also on data collected by TESS.

:newspaper: NASA's press release:

The new algorithm, which is discussed in a recent paper published in the Astronomical Journal, identified 7,000 targets as exoplanet candidates from TESS on an initial run. […] ExoMiner++ sifts through observations of possible transits to predict which ones are caused by exoplanets and which ones are caused by other astronomical events, such as eclipsing binary stars. “When you have hundreds of thousands of signals, like in this case, it’s the ideal place to deploy these deep learning technologies,” said Miguel Martinho, a KBR employee at NASA Ames who serves as the co-investigator for ExoMiner++.

:scroll: The paper describing ExoMiner++:

We present ExoMiner++, an enhanced deep learning model that builds on the success of ExoMiner to improve transit signal classification in 2-minute TESS data. ExoMiner++ incorporates additional diagnostic inputs, including periodogram, flux trend, difference image, unfolded flux, and spacecraft attitude control data, all of which are crucial for effectively distinguishing transit signals from more challenging sources of false positives (FPs). […]

Among the 147,568 unlabeled TCEs (threshold crossing events — Ed.), ExoMiner++ identifies 7330 as planet candidates (PCs), with the remainder classified as FPs (false positives — Ed.). These 7330 PCs correspond to 1868 existing TESS Objects of Interest (TOIs), 69 Community TESS Objects of Interest (CTOIs), and 50 newly introduced CTOIs. 1797 out of the 2506 TOIs previously labeled as PCs in ExoFOP are classified as PCs by ExoMiner++. This reduction in plausible candidates, combined with the excellent ranking quality of ExoMiner++, allows the follow-up efforts to be focused on the most likely candidates, increasing the overall planet yield.
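The payoff of a vetting classifier like this is triage: with a well-ranked score, telescope follow-up time goes to the top of the list instead of being spread across all signals. A minimal sketch of that ranking step; the scores and the 0.5 threshold are invented, not ExoMiner++'s actual outputs.

```python
# Minimal sketch of classifier-score triage for transit signals (TCEs).
# Scores and the 0.5 threshold are invented for illustration.

tces = [
    ("TCE-001", 0.97),  # (signal id, planet-candidate score from the model)
    ("TCE-002", 0.12),
    ("TCE-003", 0.81),
    ("TCE-004", 0.03),
    ("TCE-005", 0.55),
]

def triage(scored_tces, threshold=0.5):
    """Split signals into planet candidates (PCs) and false positives (FPs),
    with PCs ranked best-first for follow-up prioritization."""
    pcs = sorted((t for t in scored_tces if t[1] >= threshold),
                 key=lambda t: -t[1])
    fps = [t for t in scored_tces if t[1] < threshold]
    return pcs, fps

pcs, fps = triage(tces)
print([name for name, _ in pcs])  # → ['TCE-001', 'TCE-003', 'TCE-005']
print(len(fps))                   # → 2
```

This is the "ranking quality" point in the quoted passage: even without changing which signals are candidates, a good ordering concentrates follow-up effort where the planet yield is highest.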

:scroll: A second paper, out a few days ago on arXiv, describing version 2.0 of ExoMiner++:

In this work, we apply ExoMiner++ 2.0, an adaptation of the ExoMiner++ framework originally developed for TESS 2-minute data, to FFI light curves. The model is used to perform large-scale planet versus non-planet classification of Threshold Crossing Events across the sectors analyzed in this study. We construct a uniform vetting catalog of all evaluated signals and assess model performance under different observing conditions. We find that ExoMiner++ 2.0 generalizes effectively to the FFI domain, providing robust discrimination between planetary signals, astrophysical false positives, and instrumental artifacts despite the limitations inherent to longer cadence data. This work extends the applicability of ExoMiner++ to the full TESS dataset and supports future population studies and follow-up prioritization.

:floppy_disk: The software on GitHub:

3 Likes

ESA has used an AI to re-analyze the archive of images produced by Hubble. Some 1,400 "anomalous objects" were found, more than 800 of which had never been documented.

:megaphone: The press release on the ESA website:

The team developed what’s called a neural network, an AI tool that uses computers to process data and search for patterns in a way that is inspired by the human brain. Their neural network, which they named AnomalyMatch, is trained to search for and recognise rare objects like jellyfish galaxies and gravitational arcs.

The team used AnomalyMatch to search through nearly 100 million image cutouts from the Hubble Legacy Archive, marking the first time the archive has been systematically searched for astrophysical anomalies. In just two and a half days, AnomalyMatch completed its search of the archive and returned a list of likely anomalies.

As the process of tracking down rare objects still requires an expert eye, David and Pablo personally inspected the sources rated by their algorithm as most likely to be anomalous. Of these, more than 1300 were true anomalies, more than 800 of which had never been documented in the scientific literature.

:camera: Some images:


A collage of six images, showing different kinds of “anomalous” astrophysical objects. These are galaxies with unusual shapes, among them a ring-shaped galaxy, a bipolar galaxy, a group of merging galaxies, and three galaxies with warped arcs created by gravitational lensing.

:scroll: The paper:

We have systematically searched approximately 100 million image cutouts from the entire Hubble Legacy Archive using the recently developed AnomalyMatch method, which combines semi-supervised and active learning techniques for the efficient detection of astrophysical anomalies. This comprehensive search rapidly uncovered a multitude of astrophysical anomalies presented here that significantly expand the inventory of known rare objects.

Among our discoveries are 86 new candidate gravitational lenses, 18 jellyfish galaxies, and 417 mergers or interacting galaxies. The efficiency and accuracy of our iterative detection strategy allows us to trawl the complete archive within just 2–3 days, highlighting its potential for large-scale astronomical surveys.
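The "semi-supervised and active learning techniques" mentioned in the abstract boil down to an iterate-and-relabel loop: score the pool, show an expert only the top-ranked candidates, fold their labels back in, and repeat. A schematic version of that loop; the "model" is a trivial 1-D threshold stand-in, not the real AnomalyMatch classifier, and all numbers are invented.

```python
# Schematic active-learning loop in the spirit of AnomalyMatch: the expert
# only ever labels the top-ranked candidates, never the whole archive.
# The "model" is a toy 1-D threshold, not the real classifier.

def train(labeled):
    """Fit a 1-D threshold halfway between the class means (toy model)."""
    anom = [x for x, y in labeled if y == 1]
    norm = [x for x, y in labeled if y == 0]
    return (sum(anom) / len(anom) + sum(norm) / len(norm)) / 2

def score(model, x):
    return x - model  # higher = more anomalous

def active_learning(pool, labeled, oracle, rounds=3, batch=2):
    for _ in range(rounds):
        model = train(labeled)
        # Rank the unlabeled pool, most anomalous first.
        pool.sort(key=lambda x: -score(model, x))
        batch_items, pool = pool[:batch], pool[batch:]
        # The expert inspects only the top-ranked candidates.
        labeled += [(x, oracle(x)) for x in batch_items]
    return [x for x, y in labeled if y == 1]

oracle = lambda x: int(x > 5.0)     # ground-truth rule, hidden from the model
seed_labels = [(9.0, 1), (1.0, 0)]  # a few expert-provided seed labels
pool = [0.5, 8.2, 2.1, 7.7, 6.3, 1.9, 0.2, 5.9]
found = active_learning(pool, seed_labels, oracle)
print(sorted(found))  # → [5.9, 6.3, 7.7, 8.2, 9.0]
```

This is why the expert inspection step scales: as in the ESA search, the humans vet a ranked shortlist rather than 100 million cutouts.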


:satellite_antenna: Ping: Hubble: mission updates

4 Likes

AI has also been used to plan a couple of the routes driven by Perseverance.

7 Likes

Nvidia has developed new hardware optimized to operate in space.

An article from Payload (with a perhaps slightly clickbait title):

Nvidia unveiled a new piece of tech yesterday—the Space-1 Vera Rubin Module—aimed at providing the computing power necessary to fuel future in-space computing applications.

As the name suggests, the Space-1 Module is purpose-built to perform in the harsh, low-SWaP environment of space—but it’s not scaled back in its performance. Compared with the company’s H100 GPU, it’s designed to deliver up to 25x more AI-compute—enough to enable orbital data centers, according to the company.

Nvidia's press release.

NVIDIA Space-1 Vera Rubin Module delivers data-center-class AI at scale, enabling large language models and advanced foundation models to operate directly in space. Its tightly integrated CPU-GPU architecture and high-bandwidth interconnect provide the performance and memory needed to process massive data streams from space-based instruments in real time. By bringing hyperscale AI capability into orbital platforms, Space-1 Vera Rubin Module unlocks on-orbit analytics, autonomous scientific discovery and rapid insight generation.

The passage naming some of the customers that are already using Nvidia AI technology in orbit, or intend to, is interesting. Each link leads to an article describing the corresponding use case.

Industry leaders Aetherflux, Axiom Space, Kepler Communications, Planet, Sophia Space and Starcloud are using NVIDIA accelerated computing platforms to power next-generation space missions across orbital and ground environments.

2 Likes

An AI already used for the JWST will now help improve the analysis of data collected by Vera Rubin.

:newspaper: The article from space.com.

AI image processing has sped up analysis of data from NASA’s James Webb Space Telescope from years to mere days or less, ushering in an avalanche of ground-breaking discoveries that may otherwise never have been made. And now, the technology will be used to enhance the quality of images taken by the Chile-based Vera C. Rubin Observatory, the newest astronomy power house, to make them appear as sharp as if they have been taken from space.

Rubin’s observations suffer from significant distortions, as light from distant celestial objects must pass through Earth’s atmosphere before it hits the telescope’s detectors. A new AI algorithm developed by researchers from the University of California, Santa Cruz (UCSC) will now attempt to remove this distortion and increase the resolution of the images to make them look as if they have been taken from space. […]

The results were impressive. The researchers said in a paper that the Neo model "improves the accuracy of measured morphological parameters by factors of 2-10." In practice, that means an increased resolution that reveals a vast quantity of individual stars and precise shapes of galaxies where before one would find only vague smudges.

:scroll: The paper on arXiv.

Here, we develop a conditional generative adversarial network, called Neo, trained to transform existing ground-based images into sharper, finer-scale images comparable to space-based image quality. We demonstrate that Neo improves the accuracy of measured morphological parameters by factors of 2-10 when trained to translate Subaru Hyper Suprime-Camera (HSC) images to approximate Hubble Space Telescope (HST) data. Neo is designed for applicability to ongoing, large-scale surveys such as the Legacy Survey of Space and Time (LSST) conducted by Vera C. Rubin Observatory in combination with space telescopes such as HST, James Webb Space Telescope, and Nancy Grace Roman Space Telescope. These results suggest that Neo could be used to improve both cosmological and galaxy evolution analyses based on massive, ground-based survey datasets like LSST.
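The "factors of 2-10" claim is a ratio of measurement errors: how far off a morphological parameter is when measured from the raw ground-based image versus from the enhanced image. A worked toy example of that metric; all the numbers here are invented for illustration, not from the paper.

```python
# Toy illustration of an "improvement factor" for one morphological
# parameter (e.g. a galaxy's half-light radius). All numbers are invented.

true_radius = 1.50  # arcsec; ground truth from a space-based reference image

# Repeated measurements of the same parameter, ground-based vs AI-enhanced.
ground_measurements   = [1.95, 1.10, 1.80, 1.25, 1.90]
enhanced_measurements = [1.55, 1.45, 1.56, 1.44, 1.54]

def rms_error(measurements, truth):
    """Root-mean-square deviation of the measurements from the truth."""
    return (sum((m - truth) ** 2 for m in measurements)
            / len(measurements)) ** 0.5

improvement = (rms_error(ground_measurements, true_radius)
               / rms_error(enhanced_measurements, true_radius))
print(round(improvement, 1))  # → 7.0
```

With these made-up numbers the enhanced measurements are about seven times more accurate, i.e. an improvement factor inside the paper's quoted 2-10 range.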

:satellite_antenna: Ping: Vera C. Rubin Observatory - LSST

1 Like