
China releases 60,000-minute vision-and-touch robotics dataset

CGTN

Baihu-VTouch, a cross-body vision-and-touch multimodal dataset containing more than 60,000 minutes of robot interaction data, was released on Tuesday and is described as one of the largest open-source datasets of its kind, China Media Group reported.

Training data for embodied artificial intelligence has long been dominated by visual inputs, leading robots to rely heavily on sight while lacking tactile perception. This shortage of tactile data has limited robots' ability to operate in poor lighting or to handle fragile objects.

To address this gap, Baihu-VTouch records pressure and deformation data across a range of physical contact modes, covering real-world scenarios such as household services, industrial manufacturing, catering and specialized operations.

Collected across multiple robot configurations, including wheeled and bipedal platforms, the dataset covers more than 380 task types involving over 500 real-world objects and is structured around more than 100 basic manipulation skills such as grasping, inserting, rotating and placing.
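To make that structure concrete, the Python sketch below shows how one synchronized vision-and-touch sample might be organized. It is purely illustrative: the report does not describe Baihu-VTouch's actual schema, so every field name, array shape and unit here is an assumption, not the real format.

# Hypothetical sketch only: Baihu-VTouch's real schema is not published in
# this report, so all names, shapes and units below are assumptions.
from dataclasses import dataclass
import numpy as np

@dataclass
class TactileReading:
    """One tactile frame: a pressure grid plus a surface-deformation grid."""
    pressure: np.ndarray      # assumed (rows, cols) taxel pressures, e.g. in kPa
    deformation: np.ndarray   # assumed (rows, cols) displacements, e.g. in mm
    timestamp_ms: int         # time offset within the clip

@dataclass
class VisionTouchSample:
    """One synchronized vision-and-touch interaction clip."""
    rgb_frames: np.ndarray          # (T, H, W, 3) camera frames
    tactile: list[TactileReading]   # tactile stream aligned with the frames
    skill: str                      # e.g. "grasping", "inserting"
    task: str                       # one of the 380+ task types
    platform: str                   # e.g. "wheeled" or "bipedal"

# A toy sample showing the intended one-to-one alignment of the two streams.
sample = VisionTouchSample(
    rgb_frames=np.zeros((30, 480, 640, 3), dtype=np.uint8),
    tactile=[TactileReading(np.zeros((16, 16)), np.zeros((16, 16)), t * 33)
             for t in range(30)],
    skill="grasping",
    task="pick up a fragile glass",
    platform="bipedal",
)
print(f"{len(sample.tactile)} tactile frames aligned with "
      f"{sample.rgb_frames.shape[0]} video frames")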

The dataset was released by the National and Local Co-built Humanoid Robot Innovation Center in collaboration with a technology firm, and is designed to support about 90 percent of daily and industrial manipulation tasks.

Training data is central to the development of intelligent robots. In June 2025, for example, China opened its largest humanoid robot training facility, the Hubei Humanoid Robot Center, where hundreds of robots deployed across 23 simulated settings can collect more than 10 million data points annually.

Currently, 6,000 minutes of the Baihu-VTouch dataset have been made available on the open-source robotics platform OpenLoong.
