Good morning. It's Saturday, March 21st, 2026. Here's what's happening at the intersection of science, space, and technology.
**Artemis II is back on the pad.** After weeks of repairs, setbacks, and a weather delay that pushed the rollout back by several days, NASA's SLS rocket carrying the Orion capsule rolled out of the Vehicle Assembly Building at Kennedy Space Center just after midnight Friday, arriving at Launch Pad 39B on the morning of Friday, March 20th. This is the second rollout to the pad, following the helium anomaly that forced the vehicle back to the VAB in February. The target launch window for the first crewed lunar flyby since Apollo 17 opens April 1st. Four astronauts are finally getting closer to their ride around the Moon.
**SpaceX is counting down to a full 33-engine static fire of Starship's V3 booster.** Booster 19, the first Super Heavy fully equipped with Raptor 3 engines, conducted a 10-engine test on the new Pad 2 at Starbase earlier this week, but the test ended early due to a ground-side issue, not a vehicle problem. SpaceX is now pushing toward the full 33-engine static fire, which would be a landmark test: first V3 booster, first Raptor 3 engines, first use of Launch Complex B, all at once. Flight 12 was originally targeting March, but with the end of the month rapidly approaching, April looks more likely. Once that static fire clears, the countdown to the first V3 flight begins in earnest.
**LangChain just open-sourced a full coding agent framework called Open-SWE.** Announced in the last 24 hours, it's an asynchronous software engineering agent built on LangGraph, designed to handle long-horizon coding tasks without blocking. The project is fully open-source on GitHub, meaning developers can inspect, modify, and extend the architecture. This follows LangChain's recent "harness engineering" push and their Deep Agents framework: the idea that the scaffolding around the model matters as much as the model itself. Open-SWE is a direct shot across the bow at proprietary coding agents from Anthropic, OpenAI, and GitHub. Worth watching for anyone building agentic development pipelines.
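For a flavor of what "asynchronous, long-horizon, non-blocking" means in practice, here's a minimal sketch using only Python's standard library. This is not Open-SWE's actual API; the task names and steps are hypothetical stand-ins for an agent's plan-act-observe loop, and real work would await model and tool calls instead of sleeping:

```python
import asyncio

async def run_task(name: str, steps: list[str]) -> str:
    # Stand-in for a long-horizon agent loop: each step would normally
    # await a model call or a tool invocation (edit file, run tests, ...).
    log = []
    for step in steps:
        await asyncio.sleep(0)  # yield control so other tasks can progress
        log.append(f"{name}:{step}")
    return " -> ".join(log)

async def main() -> list[str]:
    # Launch several coding tasks concurrently; none blocks the others,
    # which is the core property an async agent framework provides.
    return await asyncio.gather(
        run_task("fix-bug", ["plan", "edit", "test"]),
        run_task("add-docs", ["plan", "write"]),
    )

results = asyncio.run(main())
print(results[0])  # fix-bug:plan -> fix-bug:edit -> fix-bug:test
```

The point of the sketch is the shape, not the content: the caller fires off work and gets results back later, so a slow multi-hour refactor doesn't hold up anything else.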
**Finally, MIT researchers have demonstrated something straight out of science fiction: using generative AI to reconstruct hidden 3D objects from reflected Wi-Fi signals.** Prior wireless vision systems could detect motion or rough shapes, but struggled with precision. The new approach uses generative AI to fill in the gaps, essentially letting the model hallucinate physically plausible completions to incomplete signal data, then constraining those hallucinations against measured reflections. The result is detailed 3D reconstructions of objects occluded behind walls or obstacles. Applications range from search and rescue to robotics navigation, anywhere you need to see without line of sight. It's early-stage research, but the approach of using generative models to super-resolve noisy sensor data is a pattern we're going to see a lot more of.
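That "propose a plausible completion, then force it to agree with the data" loop is an old and general pattern. Here's a deliberately tiny 1D toy of it, with a smoothness prior standing in for the generative model and a handful of known samples playing the role of measured reflections. Every name and number is illustrative; this is the pattern, not MIT's method:

```python
# Toy "generate, then constrain against measurements" loop in 1D.
n = 11
measured = {0: 0.0, 5: 5.0, 10: 0.0}  # sparse "reflection" measurements
x = [0.0] * n

for _ in range(500):
    # Prior step: propose a plausible completion. A real system would
    # sample a generative model; here we just smooth toward neighbors.
    x = [x[i] if i in (0, n - 1) else (x[i - 1] + x[i + 1]) / 2
         for i in range(n)]
    # Measurement step: force agreement with what was actually observed.
    for i, v in measured.items():
        x[i] = v

# The unknown positions settle on values consistent with both the prior
# (smoothness) and the data (the clamped samples).
print([round(v, 2) for v in x])
# [0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 4.0, 3.0, 2.0, 1.0, 0.0]
```

Swap the smoothing step for a strong generative model of 3D shape and the clamped samples for RF measurements, and you have the rough silhouette of how this family of sensing systems works.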
That's your Saturday morning briefing. Artemis is on the pad, Starship is lighting more engines, the open-source coding agent race is heating up, and your Wi-Fi router may soon know more about you than you'd like. Have a good weekend, and as always, stay curious.