Google is starting a new research moonshot called Project Suncatcher to “one day scale machine learning in space.”
This would place Google Tensor Processing Unit (TPU) AI chips aboard an interconnected network of satellites to “harness the full power of the Sun.”
Specifically, in a “dawn–dusk sun-synchronous low earth orbit,” a “solar panel can be up to 8 times more productive than on earth,” delivering near-continuous power and reducing the need for batteries and other power generation.
In the future, space may be the best place to scale AI compute.
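The “up to 8 times” figure can be sanity-checked with rough numbers. The irradiance and capacity-factor values below are illustrative assumptions, not figures from Google’s paper:

```python
# Back-of-the-envelope check of the "up to 8x" solar productivity claim.
# All constants here are illustrative assumptions, not Google's numbers.

SOLAR_CONSTANT = 1361          # W/m^2, sunlight above the atmosphere
SPACE_ILLUMINATION = 0.99      # dawn-dusk SSO is sunlit almost continuously

PEAK_GROUND_IRRADIANCE = 1000  # W/m^2, standard test condition on Earth
GROUND_CAPACITY_FACTOR = 0.17  # accounts for night, weather, sun angle

space_avg = SOLAR_CONSTANT * SPACE_ILLUMINATION
ground_avg = PEAK_GROUND_IRRADIANCE * GROUND_CAPACITY_FACTOR

ratio = space_avg / ground_avg
print(f"space/ground average power ratio ~ {ratio:.1f}x")
```

With these assumptions the ratio lands at roughly 8x, matching the claim; a sunnier terrestrial site (higher capacity factor) would narrow the gap.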
These satellites would connect via free-space optical links, with large-scale ML workloads “distributing tasks across numerous accelerators with high-bandwidth, low-latency connections.” To match data centers on Earth, the links between satellites would have to carry tens of terabits per second, and the satellites would have to fly in “very close formation (kilometers or less).”
…with satellites positioned just hundreds of meters apart, we will likely only require modest station-keeping maneuvers to maintain stable constellations within our desired sun-synchronous orbit.
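One reason close formation matters is propagation delay: at kilometer-scale separations, the one-way light travel time between satellites stays in the low microseconds, comparable to links inside a terrestrial data center. A quick illustration (distances are hypothetical, not from Google’s design):

```python
# One-way optical-link propagation delay vs. satellite separation.
# Separations are illustrative, not from Google's constellation design.

C = 299_792_458  # speed of light in vacuum, m/s

for separation_m in (100, 1_000, 100_000):
    delay_us = separation_m / C * 1e6
    print(f"{separation_m:>7} m -> one-way delay {delay_us:.2f} us")
```

At hundreds of meters the delay is a fraction of a microsecond; spread the constellation over hundreds of kilometers and it would grow by three orders of magnitude, undermining the low-latency interconnect a distributed ML workload needs.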
Google has already conducted radiation testing on TPUs (Trillium, v6e), with “promising” results:
While the High Bandwidth Memory (HBM) subsystems were the most sensitive component, they only began showing irregularities after a cumulative dose of 2 krad(Si) — nearly three times the expected (shielded) five year mission dose of 750 rad(Si). No hard failures were attributable to TID up to the maximum tested dose of 15 krad(Si) on a single chip, indicating that Trillium TPUs are surprisingly radiation-hard for space applications.
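The dose margins quoted above work out as follows (figures taken directly from the quoted test results):

```python
# Radiation margins from Google's Trillium (v6e) TPU testing, in rad(Si).
FIRST_IRREGULARITY = 2_000  # 2 krad(Si): first HBM irregularities observed
MISSION_DOSE = 750          # expected shielded five-year mission dose
MAX_TESTED = 15_000         # 15 krad(Si): no hard TID failures up to here

print(f"margin to first HBM irregularity: {FIRST_IRREGULARITY / MISSION_DOSE:.2f}x")
print(f"margin at maximum tested dose:    {MAX_TESTED / MISSION_DOSE:.0f}x")
```

That is roughly a 2.7x margin before any irregularity, the “nearly three times” in the quote, and a 20x margin with no hard failures.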
Finally, Google believes that launch costs will “fall to less than $200/kg by the mid-2030s.” At that point, the “cost of launching and operating a space-based data center could become roughly comparable to the reported energy costs of an equivalent terrestrial data center on a per-kilowatt/year basis.”
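A rough sketch of how that parity could work, where the mass-per-kilowatt, amortization period, and electricity price are all illustrative assumptions rather than figures from Google’s analysis:

```python
# Rough parity check for space launch cost vs. terrestrial energy cost.
# Every constant here is an illustrative assumption, not Google's figure.

LAUNCH_COST_PER_KG = 200   # $/kg, the projected mid-2030s launch price
SATELLITE_KG_PER_KW = 10   # assumed satellite mass per kW of delivered power
MISSION_YEARS = 5          # assumed amortization period

launch_per_kw_year = LAUNCH_COST_PER_KG * SATELLITE_KG_PER_KW / MISSION_YEARS

# Terrestrial comparison: assumed ~$0.08/kWh industrial power, running 24/7.
terrestrial_per_kw_year = 0.08 * 24 * 365

print(f"amortized launch cost:   ${launch_per_kw_year:.0f}/kW-year")
print(f"terrestrial energy cost: ${terrestrial_per_kw_year:.0f}/kW-year")
```

Under these assumptions the two figures land in the same few-hundred-dollars-per-kW-year range, which is the “roughly comparable” shape of the argument.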
Our initial analysis shows that the core concepts of space-based ML compute are not precluded by fundamental physics or insurmountable economic barriers.
Google still has to work through engineering challenges like thermal management, high-bandwidth ground communications, and on-orbit system reliability. It’s partnering with Planet to launch two prototype satellites by early 2027 that will test how “models and TPU hardware operate in space and validate the use of optical inter-satellite links for distributed ML tasks.”
More details are available in “Towards a future space-based, highly scalable AI infrastructure system design.”