GLM's cooking up plans to drop a 30-billion-parameter model in 2025, which is a pretty smart pivot from their current 355-billion-parameter beast. Chinese AI companies are going full throttle in the race for market share, and GLM's latest move shows they're serious about offering different model sizes while dealing with the real-world headaches of working with super-massive systems.
Here's the thing—GLM researchers are finding it harder to run experiments on their 355-billion-parameter model. They're basically saying they need to test ideas on smaller setups first, like 9-billion or 30-billion-parameter models, before going big. The team's also admitting that their current GLM-4.6 architecture is hitting a wall, which means they might need to rethink the whole framework for what comes next.
"Large-scale experimentation is increasingly difficult with a 355-billion-parameter model. We need to run scientific tests on smaller architectures to validate hypotheses before scaling."
This fits right into what's happening across Chinese AI development—everyone's rushing to build out multiple model tiers for different budgets and business needs. By throwing a 30-billion-parameter option into the mix, GLM can hit way more use cases, from big corporate workloads to research labs that need to move fast without burning through their budget.
GLM's shift shows where AI competition is headed now. It's not just about having one massive system anymore—you need a whole ecosystem of models at different sizes that work together. Rolling out this mid-sized model could bring more developers on board, speed up testing pipelines, and lock down GLM's spot in China's exploding AI scene.
Peter Smith