China's Beijing Institute for General Artificial Intelligence (BIGAI) released a new humanoid robot motion framework, OmniXtreme, enabling robots to perform a range of highly dynamic movements, including backflips, Thomas flairs and martial arts kicks.
via BIGAI
The framework allows humanoid robots to execute dozens of complex motions with a high success rate in real-world deployments, effectively enabling "one algorithm to control multiple movements," and significantly improving efficiency in teaching robots advanced physical skills.
via BIGAI
Achieving highly coordinated, dynamic actions has long been a major challenge in robot control. In recent years, reinforcement learning has been widely used, allowing robots to learn complex movements through extensive simulation training. However, as the number and complexity of movements increase, maintaining control accuracy becomes more difficult.
Unlike traditional reinforcement learning methods that train a single policy from scratch, OmniXtreme adopts a two-stage learning framework. According to BIGAI, the system achieved success rates of over 90% across multiple highly dynamic tasks on humanoid robots.
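The article does not detail what OmniXtreme's two stages are. A common pattern in this line of research is to first train per-motion specialist policies and then distill them into a single multi-motion policy conditioned on a motion identifier, which matches the "one algorithm to control multiple movements" description. The toy sketch below illustrates only that general pattern; the structure, dimensions, and least-squares fits (standing in for reinforcement learning) are illustrative assumptions, not BIGAI's actual method.

```python
# Toy sketch of a two-stage "one policy, many movements" setup.
# Stage 1: fit one small specialist policy per motion (least squares stands
# in for RL-based motion tracking). Stage 2: distill all specialists into a
# single policy conditioned on a motion ID. Purely illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
STATE_DIM, ACTION_DIM, N_MOTIONS = 6, 3, 4

# Synthetic "expert" controllers, one per motion (e.g. backflip, kick, ...).
experts = [rng.normal(size=(ACTION_DIM, STATE_DIM)) for _ in range(N_MOTIONS)]

def fit_specialist(expert_W, n_samples=512):
    """Stage 1: fit a specialist to one expert's state->action behavior."""
    states = rng.normal(size=(n_samples, STATE_DIM))
    actions = states @ expert_W.T            # expert demonstrations
    W, *_ = np.linalg.lstsq(states, actions, rcond=None)
    return W.T                               # shape (ACTION_DIM, STATE_DIM)

specialists = [fit_specialist(W) for W in experts]

def motion_features(state, motion_id):
    """Place the state into the block of features owned by this motion ID."""
    feat = np.zeros(N_MOTIONS * STATE_DIM)
    feat[motion_id * STATE_DIM:(motion_id + 1) * STATE_DIM] = state
    return feat

def distill(specialists, n_samples=512):
    """Stage 2: one set of weights reproducing every specialist's actions."""
    X, Y = [], []
    for motion_id, W in enumerate(specialists):
        states = rng.normal(size=(n_samples, STATE_DIM))
        X.append(np.array([motion_features(s, motion_id) for s in states]))
        Y.append(states @ W.T)               # specialist's actions as targets
    W_all, *_ = np.linalg.lstsq(np.vstack(X), np.vstack(Y), rcond=None)
    return W_all.T                           # one policy covering all motions

unified = distill(specialists)

def act(policy_W, state, motion_id):
    """Query the unified policy for one motion's action."""
    return policy_W @ motion_features(state, motion_id)
```

In this toy setup the unified policy recovers each specialist exactly because the targets are linear in the block features; with real robots the distillation step would instead train a neural network on rollouts from the specialists.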
via BIGAI
The approach addresses the challenge of balancing motion fidelity and scalability, and researchers say it could serve as a next-generation framework for generalized humanoid robot motion, laying the groundwork for robots to acquire more complex skills in the future.
via BIGAI