
Bigger isn't always better: Examining the business case for multi-million token LLMs
The race to expand large language models (LLMs) beyond the million-token threshold has ignited a fierce debate in the AI community. Models like MiniMax-Text-01 boast a 4-million-token context window, and Gemini 1.5 Pro can process up to 2 million tokens in a single pass. These expanded windows promise game-changing applications: analyzing entire codebases, legal contracts or research papers in one inference call.
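To make the scale concrete, here is a minimal sketch of how one might estimate whether a codebase even fits inside these windows. It assumes a rough heuristic of ~4 characters per token (actual counts depend on each model's tokenizer), and the file-extension filter and window sizes are illustrative, not tied to any particular provider's API.

```python
import os

# Assumption: ~4 characters per token, a common rough heuristic for
# English text and code. Real counts vary by model tokenizer.
CHARS_PER_TOKEN = 4

def estimate_tokens(root: str, extensions=(".py", ".js", ".ts", ".md")) -> int:
    """Estimate how many tokens the files under `root` would occupy."""
    total_chars = 0
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            if name.endswith(extensions):
                path = os.path.join(dirpath, name)
                try:
                    with open(path, encoding="utf-8", errors="ignore") as f:
                        total_chars += len(f.read())
                except OSError:
                    continue  # skip unreadable files
    return total_chars // CHARS_PER_TOKEN

if __name__ == "__main__":
    tokens = estimate_tokens(".")
    # Compare against a typical 128K window and the 2M/4M windows cited above.
    for window in (128_000, 2_000_000, 4_000_000):
        verdict = "fits" if tokens <= window else "does not fit"
        print(f"~{tokens:,} tokens: {verdict} in a {window:,}-token window")
```

Even a mid-sized repository often lands in the hundreds of thousands of tokens, which is exactly why the jump from 128K to multi-million-token windows is pitched as a qualitative change rather than an incremental one.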