Westlake Robotics, incubated by Westlake University, has unveiled the humanoid robot Titan o1, powered by the world’s first motion generalization large model.
## Breakthrough Technology
- GAE “Embodied Avatar System”: a universal motion pre-training model that acts as a powerful motor “cerebellum” for robots.
- Real-Time Human Mimicry: One operator can control hundreds of robots across different locations, synchronizing movements instantly.
- Motion Memory: Robots can replay demonstrated actions via backend commands, enabling complex group performances like the “Five-Animal Exercises” showcased at the Anhui TV Spring Festival Gala.
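The “motion memory” idea above, record a demonstration once, then replay it on a backend command, can be sketched in a few lines. This is a toy illustration only; the class, field, and method names (`MotionMemory`, `record`, `replay`) are hypothetical and not part of any published Westlake API.

```python
from dataclasses import dataclass, field

@dataclass
class MotionMemory:
    """Stores timestamped joint-angle frames from a demonstration
    so the motion can be replayed later (all names hypothetical)."""
    frames: list = field(default_factory=list)  # (timestamp, {joint: angle})

    def record(self, t: float, joint_angles: dict) -> None:
        # Snapshot the demonstrated joint state at time t.
        self.frames.append((t, dict(joint_angles)))

    def replay(self, speed: float = 1.0):
        """Yield (delay, joint_angles) pairs; a controller would wait
        `delay` seconds before sending each frame to the actuators,
        preserving the demonstration's rhythm."""
        prev_t = self.frames[0][0] if self.frames else 0.0
        for t, angles in self.frames:
            yield (t - prev_t) / speed, angles
            prev_t = t

# A demonstrated arm wave, replayed at double speed:
mem = MotionMemory()
mem.record(0.0, {"shoulder": 0.0, "elbow": 0.0})
mem.record(0.5, {"shoulder": 0.8, "elbow": 0.3})
replayed = list(mem.replay(speed=2.0))
```

Because replay is driven by stored frames rather than a live operator, the same recording can be pushed to a whole fleet at once, which is what makes synchronized group performances possible.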
## Demonstration Highlights
Titan o1, in a futuristic orange-black-silver shell, was demonstrated with motion capture. When a staff member waved, turned, or kicked a ball, Titan o1 replicated every detail, from arm angles and turning range to stride length and rhythm, with millisecond precision.

## Ease of Use
No programming skills are required. With motion-capture gear or simple computer input, users can make the robots execute corresponding actions: “what you think is what they do.”
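The one-operator, many-robots pattern described earlier amounts to broadcasting each captured pose frame to every connected unit. The sketch below illustrates that idea only; the `Robot` class and its `apply_pose` method are invented stand-ins, not a real networking or control API.

```python
class Robot:
    """Toy stand-in for a networked humanoid (hypothetical API)."""
    def __init__(self, name: str):
        self.name = name
        self.pose = {}

    def apply_pose(self, frame: dict) -> None:
        # In a real system this would drive the actuators.
        self.pose = dict(frame)

def broadcast(frame: dict, robots: list) -> None:
    # One operator's captured frame drives every robot in sync.
    for robot in robots:
        robot.apply_pose(frame)

fleet = [Robot(f"titan-{i}") for i in range(3)]
broadcast({"arm": 0.5, "leg": 0.1}, fleet)
```

In practice the hard part is not the fan-out but keeping latency low enough that hundreds of remote robots stay visibly synchronized.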
## Technical Edge
- Developed entirely by Westlake’s team, the algorithm is at least six months ahead of international peers.
- Capable of handling unseen motions and adapting across different robot structures and sizes (“cross-embodiment” ability).
- Compression and adaptation methods echo the way ChatGPT generalized language and Sora generalized vision—GAE generalizes motion.
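To make the “cross-embodiment” bullet concrete: a motion learned on one body must be adapted to robots with different proportions and joint ranges. The snippet below shows only the simplest conceivable adaptation, clamping a source pose into a target robot’s joint limits, as an assumed illustration; GAE’s actual method is not public, and all names here are hypothetical.

```python
def retarget(pose: dict, limits: dict) -> dict:
    """Clamp a source pose (joint -> radians) into a target robot's
    joint limits. A toy stand-in for cross-embodiment adaptation;
    the function name and clamping rule are illustrative assumptions."""
    adapted = {}
    for joint, angle in pose.items():
        lo, hi = limits.get(joint, (-3.14, 3.14))
        adapted[joint] = max(lo, min(hi, angle))
    return adapted

# A deep human knee bend exceeds a smaller robot's knee range:
human_pose = {"knee": 2.0, "hip": 0.4}
small_robot_limits = {"knee": (0.0, 1.5), "hip": (-1.0, 1.0)}
adapted = retarget(human_pose, small_robot_limits)
```

A learned model would go far beyond per-joint clamping (e.g., preserving balance and end-effector goals), but the interface, source motion in, embodiment-specific motion out, is the same.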
## Real-World Potential
Beyond “cyber avatars,” Titan o1 could replace humans in high-risk environments such as firefighting, mining, or high-altitude maintenance, opening new horizons for practical deployment.
This marks a milestone: just as language and vision models transformed AI, GAE brings motion generalization to robotics, setting the stage for scalable, safe, and versatile humanoid applications.
