⬤ Tencent just dropped HY-World 1.5 WorldPlay, a major upgrade to its generative AI toolkit that focuses on interactive world modeling. The model is now live on Hugging Face and marks a significant milestone as the first open-source, real-time interactive world model. HY-World 1.5 WorldPlay can generate geometrically consistent 3D worlds at 24 frames per second using either text prompts or image inputs.
⬤ What sets this model apart is its real-time performance paired with spatial consistency. Running at 24 FPS means users get continuous interaction instead of static snapshots, which is a big leap from earlier world-generation tools. Plus, it works with both text-to-world and image-to-world generation, giving creators flexibility to build and tweak environments from different starting points.
⬤ The release taps into growing interest around world models—AI systems that can simulate environments that evolve over time while staying internally consistent. By open-sourcing HY-World 1.5 WorldPlay on Hugging Face, Tencent is giving researchers and developers hands-on access to real-time 3D world generation for simulation, interactive media, and embodied AI research. The real-time interaction capability puts this model at the forefront of AI systems that can maintain persistent, navigable environments.
⬤ For TCEHY, launching an open-source real-time world model strengthens developer engagement and positions Tencent competitively in virtual environments, gaming tech, and simulation tools. As the race heats up around world models and interactive AI, real-time 3D generation with geometric consistency is setting the bar for what's next in generative AI development.
Eseandre Mordi