Welcome to the agent platform research briefing for Friday, May 1st, 2026. Four new stories today.
OpenClaw shipped version 2026.4.29 on April 30th, a quality release packed with agent workflow improvements. The headline change: active-run message steering is now the default, meaning you can interrupt an agent mid-response and have it course-correct at the next model boundary instead of queuing. The memory system grows into a people-aware wiki with provenance views, person cards, relationship graphs, and per-conversation Active Memory filters, so agents can be selective about which chats they recall from. A new NVIDIA provider is added with API-key onboarding and model catalog integration. Bedrock Opus 4.7 now gets thinking-level parity. And a SQLite-backed plugin state store brings restart-safe keyed registries with TTL and eviction. On the security side: OpenGrep scanning was added, tool sections under restricted profiles no longer implicitly widen access, and the GHSA triage policy is sharper. GLaDOS is on 2026.4.22, now seven versions behind.
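The release notes don't ship code, but a restart-safe keyed registry with TTL and eviction along those lines can be sketched with nothing beyond the standard library's SQLite bindings. Everything here, the `PluginStateStore` name, the table schema, the method signatures, is an illustrative assumption, not OpenClaw's actual plugin API:

```python
import sqlite3
import time

class PluginStateStore:
    """Minimal sketch of a SQLite-backed keyed registry with TTL.

    Hypothetical API, not OpenClaw's real plugin state store. Pointing
    `path` at a file on disk (instead of ":memory:") is what makes the
    registry survive process restarts.
    """

    def __init__(self, path=":memory:"):
        self.db = sqlite3.connect(path)
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS kv ("
            " key TEXT PRIMARY KEY,"
            " value TEXT NOT NULL,"
            " expires_at REAL)"  # NULL means the entry never expires
        )

    def put(self, key, value, ttl=None):
        """Store a value, optionally expiring `ttl` seconds from now."""
        expires = time.time() + ttl if ttl is not None else None
        self.db.execute(
            "INSERT OR REPLACE INTO kv (key, value, expires_at)"
            " VALUES (?, ?, ?)",
            (key, value, expires),
        )
        self.db.commit()

    def get(self, key, default=None):
        """Read a value, lazily evicting it if its TTL has passed."""
        row = self.db.execute(
            "SELECT value, expires_at FROM kv WHERE key = ?", (key,)
        ).fetchone()
        if row is None:
            return default
        value, expires = row
        if expires is not None and expires < time.time():
            self.db.execute("DELETE FROM kv WHERE key = ?", (key,))
            self.db.commit()
            return default
        return value

    def evict_expired(self):
        """Bulk eviction pass; returns the number of rows removed."""
        cur = self.db.execute(
            "DELETE FROM kv WHERE expires_at IS NOT NULL AND expires_at < ?",
            (time.time(),),
        )
        self.db.commit()
        return cur.rowcount
```

Lazy eviction on read plus a periodic `evict_expired()` sweep is a common split: reads stay correct even if the background sweep is delayed, and the sweep keeps the table from growing unboundedly.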
Anthropic launched Claude Security in public beta on April 30th, moving it out of the closed research preview that started back in February with Claude Code Security. Powered by Opus 4.7, the tool scans entire codebases for vulnerabilities by tracing data flows and examining component interactions, not just looking for known patterns. Hundreds of organizations have already used it during the preview to find exploits that existing tools had missed for years. Major security partners including CrowdStrike, Palo Alto Networks, SentinelOne, Trend Micro, and Wiz are integrating Opus 4.7 into their platforms. The update adds scheduled scans, dismissible findings with documented reasoning, and CSV/Markdown exports for audit systems. It's available now to Claude Enterprise customers, coming soon to Team and Max tiers. This is part of Anthropic's broader cybersecurity play alongside Project Glasswing and the Mythos model.
Google announced April 30th that Gemini is rolling out to replace Google Assistant in cars with Google built-in, covering approximately 4 million GM vehicles from model year 2022 and newer across Buick, Cadillac, Chevrolet, and GMC. But the announcement didn't limit it to GM, suggesting broader expansion. Drivers can now have natural, multi-turn conversations, like asking for a sit-down restaurant with outdoor seating along their route, then following up about parking or dietary options. Gemini pulls from Google Maps in real time. Gemini Live, currently in beta, enables open-ended real-time conversations triggered by saying "Hey Google, let's talk." The rollout starts in the US with English, with more languages coming later in 2026. Future updates are expected to integrate Gmail, Calendar, and Google Home into the driving experience.
A new open-source project called speech-swift hit GitHub this week: a comprehensive AI speech toolkit built specifically for Apple Silicon, powered by MLX and CoreML. It runs entirely locally on Mac and iOS with no cloud dependency. The toolkit includes Qwen3-ASR for speech recognition across 52 languages, streaming dictation, multiple TTS options including Kokoro on the Neural Engine with 54 voices, PersonaPlex full-duplex speech-to-speech with 18 voice presets, and an OpenAI Realtime API-compatible WebSocket server out of the box. It also bundles noise suppression, music source separation, wake-word detection, and even an on-device chat LLM. For voice AI developers, this is the most complete local-first speech stack we've seen for Apple Silicon, bridging the gap between cloud-dependent voice APIs and fully offline experiences.
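Because the server speaks the OpenAI Realtime WebSocket protocol, the JSON events a client sends would look like the ones below. This is a client-side sketch only: the localhost URL is an assumption about how the toolkit might be run locally, not a documented speech-swift default, though the event types (`session.update`, `input_audio_buffer.append`, `response.create`) are real OpenAI Realtime API events:

```python
import base64
import json

# Assumed local endpoint; speech-swift's actual host/port/path may differ.
REALTIME_URL = "ws://localhost:8080/v1/realtime"

def session_update_event(voice="alloy"):
    """Configure the session (OpenAI Realtime 'session.update' event)."""
    return {
        "type": "session.update",
        "session": {
            "modalities": ["text", "audio"],
            "voice": voice,
        },
    }

def audio_append_event(pcm_bytes):
    """Stream one audio chunk ('input_audio_buffer.append' event).

    The protocol carries audio as base64 text inside JSON frames.
    """
    return {
        "type": "input_audio_buffer.append",
        "audio": base64.b64encode(pcm_bytes).decode("ascii"),
    }

def response_create_event():
    """Ask the server to start generating a reply ('response.create')."""
    return {"type": "response.create"}

# A client would serialize each event and send it as a text frame
# over the WebSocket connection to REALTIME_URL:
frames = [json.dumps(e) for e in (
    session_update_event(),
    audio_append_event(b"\x00\x00" * 160),  # 10 ms of 16-bit silence at 16 kHz mono
    response_create_event(),
)]
```

The appeal of protocol compatibility is exactly this: a client written against OpenAI's hosted Realtime endpoint should, in principle, only need its URL swapped to talk to the local server instead.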
That's the briefing for today.