January Release Spotlight

Our monthly Product Spotlight highlights a few of our biggest releases from the past month.
Partition Sorting: prioritize fast providers
Set a minimum throughput or maximum latency threshold and OpenRouter will deprioritize providers that don't meet it, with no latency hit on your requests. Combine with partition: "none" across fallback models to find the cheapest option that still meets your performance floor.
Try It | Announcement
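As a sketch, a request with a performance floor might look like the following. The partition: "none" value comes from the note above; the min_throughput field name, its placement under provider, and the model IDs are illustrative assumptions, not confirmed API parameters.

```typescript
// Sketch of a request body combining a throughput floor with
// partition: "none" across fallback models.
// Assumptions: the "min_throughput" field name, its placement under
// "provider", and the model slugs are illustrative, not confirmed.
const body = {
  models: [
    "meta-llama/llama-3.1-70b-instruct", // hypothetical fallback chain
    "mistralai/mixtral-8x7b-instruct",
  ],
  provider: {
    min_throughput: 50,  // assumed name: minimum tokens/sec to qualify
    partition: "none",   // consider all qualifying providers together
  },
  messages: [{ role: "user", content: "Hello" }],
};

// The gateway would then route to the cheapest provider that clears
// the throughput floor, across every model in the fallback list.
```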
Provider Explorer
Explore all providers on OpenRouter in one place. DeepInfra has the most models, and OpenAI has the most proprietary ones.
Try It | Announcement
Bug & Feedback Reporting
Report bugs or feedback on any generation from the Chatroom, your Activity page, or via API. We'll use these reports to help quantify provider degradation, with more to come.
Try It | Announcement
Auto Router customization
Auto Router now supports 58 models including Opus 4.5, works with tool calling, and lets you customize allowed models using wildcard syntax (e.g. anthropic/*). No markup over the routed model’s market price. Per-request API support included.
Try It | Announcement
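A minimal sketch of restricting the Auto Router with wildcards: the anthropic/* wildcard syntax comes from the note above and "openrouter/auto" is the router's model slug, but the exact allow-list field name ("models" here) and the extra model ID are assumptions.

```typescript
// Sketch: constrain the Auto Router to a subset of models per request.
// The "anthropic/*" wildcard is from the release notes; the "models"
// allow-list field name and the second slug are illustrative assumptions.
const request = {
  model: "openrouter/auto",           // let the router pick the model
  models: ["anthropic/*", "openai/gpt-4o"], // assumed allow-list field
  messages: [{ role: "user", content: "Summarize this document." }],
};

// The router then chooses only among models matching the wildcard
// patterns, with no markup over the routed model's price.
```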
SDK Skills Loader
Load encapsulated, composable skills into any model's context via the OpenRouter SDK. Skills inject domain-specific instructions automatically, with built-in idempotency so the same skill is never loaded twice.
Try It | Announcement
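The idempotency guarantee can be illustrated with a toy loader (this is a conceptual sketch, not the actual SDK API): each skill's instructions are injected at most once, no matter how many times the skill is loaded.

```typescript
// Toy illustration of idempotent skill loading (not the SDK's API):
// a loader that injects each skill's instructions into the context
// at most once, keyed by skill name.
interface Skill {
  name: string;
  instructions: string;
}

class SkillLoader {
  private loaded = new Set<string>();
  private contextBlocks: string[] = [];

  load(skill: Skill): void {
    if (this.loaded.has(skill.name)) return; // idempotent: skip duplicates
    this.loaded.add(skill.name);
    this.contextBlocks.push(skill.instructions);
  }

  context(): string {
    return this.contextBlocks.join("\n\n");
  }
}

const loader = new SkillLoader();
const sql: Skill = { name: "sql", instructions: "You write safe, parameterized SQL." };
loader.load(sql);
loader.load(sql); // second call is a no-op
```

Dedup-by-name keeps skills composable: callers can load a skill unconditionally without worrying about double-injecting its instructions.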
LLM Leaderboard over 50% faster
The LLM Leaderboard now loads more than 50% faster, using IntersectionObserver-based lazy loading combined with code-splitting to cut total blocking time in half.
Announcement
70% faster gateway
Major p99 latency improvements across the gateway make it the fastest gateway in our benchmarks.
Announcement
Are we missing something you want to see? Let us know on Discord.