4 Comments
8Lee

OpenClaw (and its ilk) is the last thing I'm excited about tracking; it feels a bit like the crypto / NFT bubble, where madness disguised as genius forced tons of people to walk out of their basements and buy a Mac mini for the first time in their lives. It's so funny that it's not.

The other stuff? Yeah. Excited.

Mitchell Kosowski

Great synthesis of where things stand! One trend I'd add to the list: the commoditization of the model layer itself.

You touch on it implicitly throughout (RLVR spreading, open-weight models closing the gap, reasoning becoming table stakes), but I think the takeaway deserves to be stated more directly: by the end of 2026, the model won't be the moat. The race is already shifting to distribution, data access, and workflow integration.

Jason Crittenden

I think the way you describe it makes it harder to see, but what you're describing is the orchestration layer: the layer that decides which inference service to call, which tools to use, and what memory architecture to adopt. These are all platform-level decisions, and in the enterprise, all of those decisions are moving there.

Consumer models have less discriminating customers, but I think OpenAI's behavior makes it clear that even they don't have faith in that moat.

Pawel Jozefiak

The open-weight models section is where this gets real. Qwen3-Coder-Next running close to top closed models on consumer hardware is not a benchmark curiosity; it changes what's buildable without an API key.

Your point about persistent local agents connecting to files and apps while keeping data on-device describes something that's already possible today on Apple Silicon, not just a 2026 prediction.

The gap between 'runs locally' and 'actually useful for agentic tasks' closed faster than most expected, and the inference format and compression improvements you mention are the unglamorous reason why.