Good morning. It's Friday, March 13th, 2026.
**NVIDIA drops Nemotron 3 Super, a new open model built for the agentic era.** Just released today, Nemotron 3 Super is a 120-billion-parameter mixture-of-experts model with only 12 billion parameters active at a time, designed specifically for multi-agent AI workflows. NVIDIA claims five times higher throughput than comparable models, and the model packs a one-million-token context window to help agents maintain full workflow state across long tasks. That addresses a real problem: multi-agent workloads generate up to 15 times more tokens than standard chat, so "goal drift" becomes a serious issue when agents lose track of what they were actually supposed to do. The model is already being integrated by Perplexity, CodeRabbit, Factory, and Greptile for code review and AI agent orchestration. Enterprise deployments are spinning up at Palantir, Cadence, Siemens, and Dassault Systèmes. This lands just three days before NVIDIA's GTC 2026 keynote, and that timing is clearly not accidental.
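To put that 15x figure in perspective, here's a back-of-envelope sketch of how fast agentic workloads can eat a context window. The 15x multiplier and the one-million-token window come from NVIDIA's claims; the per-turn chat token count is a purely illustrative assumption.

```python
# Back-of-envelope: how quickly multi-agent workloads fill a context window.
# The 15x multiplier and 1M-token window are NVIDIA's claimed figures;
# CHAT_TOKENS_PER_TURN is an illustrative guess, not a measured number.

CONTEXT_WINDOW = 1_000_000       # Nemotron 3 Super's claimed context window
CHAT_TOKENS_PER_TURN = 2_000     # assumed typical single-chat turn (guess)
AGENT_MULTIPLIER = 15            # "up to 15 times more tokens than chat"

agent_tokens_per_turn = CHAT_TOKENS_PER_TURN * AGENT_MULTIPLIER
turns_before_full = CONTEXT_WINDOW // agent_tokens_per_turn

print(agent_tokens_per_turn)  # 30000 tokens per multi-agent "turn"
print(turns_before_full)      # 33 turns before the window is exhausted
```

Under those assumptions, a workflow that would fit hundreds of plain chat turns fills the window in a few dozen agent turns, which is exactly where the "goal drift" problem bites.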
**Speaking of GTC: Jensen Huang takes the stage Monday at eleven AM Pacific.** NVIDIA's annual conference kicks off this weekend in San Jose, and this year the expectations are unusually high. Jensen teased "a chip that will surprise the world" in a YouTube clip earlier this week. The confirmed agenda includes Vera Rubin, NVIDIA's next-generation GPU architecture built for HBM4 and designed to roughly triple Blackwell's inference throughput. There's also strong expectation that Feynman, the architecture after Rubin, will get a preview. And the recently acquired Groq LPU integration is expected to be explained in detail: Groq fills a gap in NVIDIA's portfolio for high-interactivity, single-token-at-a-time workloads that the NVL72 rack handles inefficiently at agent scale. If you care about where AI hardware is going, Monday's keynote is must-watch territory.
**Starship Flight 12 is creeping closer: Booster 19 nails a faster propellant load test.** SpaceX ran a full propellant loading test on the V3 Super Heavy booster at Starbase on Tuesday, and observers clocked it at around 30 minutes to fill 3,650 tonnes of liquid oxygen and methane. That's about five minutes faster than the V1 and V2 generations, a meaningful improvement for the turnaround times SpaceX will need if Starship is ever going to operate on anything like a regular schedule. The static fire is up next, reportedly using only 10 of Booster 19's 33 Raptor 3 engines for the initial test. Full stacking with Ship 39, the first fully heat-shielded V3 upper stage, is expected to follow. Flight 12 is still officially targeting late March, though SpaceNews has been reporting an April slip. Either way, this is the first Starship launch attempt since October 2025.
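The reported numbers let you derive the average fill rate directly. A quick sketch, assuming "about five minutes faster" means the older boosters took roughly 35 minutes (the tonnage and V3 timing are from the observer reports; the V1/V2 baseline is inferred):

```python
# Fill-rate arithmetic from the reported figures: 3,650 tonnes loaded in
# ~30 minutes on V3, versus an inferred ~35 minutes on V1/V2 boosters.

PROPELLANT_TONNES = 3_650
V3_MINUTES = 30
V1_V2_MINUTES = 35  # inferred from "about five minutes faster"

v3_rate = PROPELLANT_TONNES / V3_MINUTES      # average tonnes per minute
old_rate = PROPELLANT_TONNES / V1_V2_MINUTES

print(round(v3_rate, 1))   # 121.7 t/min on V3
print(round(old_rate, 1))  # 104.3 t/min on V1/V2
print(f"{(v3_rate / old_rate - 1) * 100:.0f}% faster average flow")  # 17%
```

That works out to an average flow north of 120 tonnes per minute, roughly a 17% improvement in average loading rate, which is the kind of margin that compounds over a high-cadence launch schedule.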
**The Anthropic versus Pentagon fight just moved into federal court.** Anthropic filed an emergency stay request with the U.S. Court of Appeals for the D.C. Circuit on Wednesday, arguing that the Pentagon's supply chain risk designation is causing "irreparable harm" to the company. This comes after Defense Secretary Hegseth blacklisted Claude products following Anthropic's refusal to drop safety guardrails; OpenAI then stepped in to claim the contract. Meanwhile, Palantir CEO Alex Karp confirmed his company is still using Anthropic's Claude despite the blacklist, saying the designation doesn't bind commercial customers. Microsoft, Google, Amazon, Apple, and OpenAI all filed amicus briefs backing Anthropic in the case. No ruling yet. This one has implications that go well beyond one contract: it's essentially a test of whether the federal government can compel AI companies to remove safety controls as a condition of doing business.
That's your Friday morning briefing. GTC keynote Monday, Starship static fire window open, and a federal court case that could define the ground rules for AI in government for years to come. Stay curious.