{"id":661,"date":"2025-11-13T16:50:34","date_gmt":"2025-11-13T08:50:34","guid":{"rendered":"https:\/\/hightorque.cn\/en\/?p=661"},"modified":"2025-11-13T16:55:40","modified_gmt":"2025-11-13T08:55:40","slug":"training-review-mini-pi-mimic-baseline-training-open-source-dataset-retargeting-imitation-learning-training-and-sim2real","status":"publish","type":"post","link":"https:\/\/hightorque.cn\/en\/training-review-mini-pi-mimic-baseline-training-open-source-dataset-retargeting-imitation-learning-training-and-sim2real","title":{"rendered":"Training Review | Mini Pi+ Mimic Baseline Training: Open-Source Dataset &#038; Retargeting, Imitation Learning Training, and Sim2Real"},"content":{"rendered":"<section data-exeditor-arbitrary-box=\"wrap\">\n<section data-exeditor-arbitrary-box=\"wrap\"><img decoding=\"async\" loading=\"lazy\" class=\"size-full wp-image-2535 aligncenter\" src=\"https:\/\/www.hightorque.cn\/wp-content\/uploads\/2025\/11\/\u6297\u6270.gif\" alt=\"\" width=\"426\" height=\"240\" \/><\/section>\n<\/section>\n<section data-exeditor-arbitrary-box=\"wrap\">\n<p style=\"text-align: center;\"><span style=\"color: #999999;\">Demonstration of Mimic Baseline Humanoid Motion Control Effect<\/span><\/p>\n<\/section>\n<p>&nbsp;<\/p>\n<section data-exeditor-arbitrary-box=\"wrap\"><strong>\u25ba Event Review<\/strong><\/section>\n<section data-exeditor-arbitrary-box=\"wrap\"><\/section>\n<section data-exeditor-arbitrary-box=\"wrap\"><\/section>\n<p data-exeditor-arbitrary-box=\"wrap\">On October 30, 2025, the \"Mini Pi+ Mimic Baseline Training\" hosted by High Torque was successfully held, attracting nearly 100 participants including university research teams, robot enthusiasts, and industry developers.<\/p>\n<p>This training was delivered by technical operation engineers from High Torque. 
The content covered open-source datasets, the principles and practical operations of motion retargeting, the Mimic training framework, Sim2Real deployment, and engineering experience sharing. Practical demonstrations were also conducted on the real High Torque Mini Pi+ robot.<\/p>\n<p>The training aimed to help developers gain an in-depth understanding of how imitation learning is applied to whole-body motion control of humanoid robots, master the complete implementation path from data to real robots, and get started with humanoid robot development efficiently and with a low barrier to entry.<\/p>\n<section data-exeditor-arbitrary-box=\"wrap\"><\/section>\n<section data-exeditor-arbitrary-box=\"wrap\"><\/section>\n<section data-exeditor-arbitrary-box=\"wrap\"><\/section>\n<section data-exeditor-arbitrary-box=\"wrap\"><\/section>\n<section data-exeditor-arbitrary-box=\"wrap\"><strong>\u25ba Highlights Introduction<\/strong><\/section>\n<section data-exeditor-arbitrary-box=\"wrap\"><\/section>\n<section data-exeditor-arbitrary-box=\"wrap\"><\/section>\n<section data-exeditor-arbitrary-box=\"wrap\">\n<section data-exeditor-arbitrary-box=\"wrap\" data-exeditor-arbitrary-box-special-style=\"width\">\n<section data-exeditor-arbitrary-box=\"wrap\">\n<section data-exeditor-arbitrary-box=\"wrap\">\n<section data-exeditor-arbitrary-box=\"wrap\">\n<section data-exeditor-arbitrary-box=\"wrap\">\n<section data-exeditor-arbitrary-box=\"wrap\">\n<section data-exeditor-arbitrary-box=\"wrap\"><strong>Highlight 1: Provision of a Fully Reproducible Mimic Baseline Framework<\/strong><\/section>\n<section data-exeditor-arbitrary-box=\"wrap\"><\/section>\n<section data-exeditor-arbitrary-box=\"wrap\"><\/section>\n<section data-exeditor-arbitrary-box=\"wrap\"><\/section>\n<\/section>\n<\/section>\n<\/section>\n<\/section>\n<\/section>\n<\/section>\n<\/section>\n<p data-exeditor-arbitrary-box=\"wrap\">The training fully presented the implementation process of the Mimic Baseline, 
covering key stages such as datasets, model training, and real-robot verification. Through a combination of theory and hands-on practice, it helped developers gain an in-depth understanding of the core principles of imitation learning and the implementation path of the key technologies. Relying on High Torque\u2019s self-developed humanoid robot platform, developers can quickly reproduce mainstream algorithm frameworks, with support for flexible secondary development and real-robot verification.<\/p>\n<section data-exeditor-arbitrary-box=\"wrap\"><\/section>\n<section data-exeditor-arbitrary-box=\"wrap\"><\/section>\n<section data-exeditor-arbitrary-box=\"wrap\"><\/section>\n<section data-exeditor-arbitrary-box=\"wrap\">\n<section data-exeditor-arbitrary-box=\"wrap\" data-exeditor-arbitrary-box-special-style=\"width\">\n<section data-exeditor-arbitrary-box=\"wrap\">\n<section data-exeditor-arbitrary-box=\"wrap\">\n<section data-exeditor-arbitrary-box=\"wrap\">\n<section data-exeditor-arbitrary-box=\"wrap\">\n<section data-exeditor-arbitrary-box=\"wrap\">\n<section data-exeditor-arbitrary-box=\"wrap\"><strong>Highlight 2: Integration of Principles with Practical Engineering Experience<\/strong><\/section>\n<section data-exeditor-arbitrary-box=\"wrap\"><\/section>\n<section data-exeditor-arbitrary-box=\"wrap\"><\/section>\n<section data-exeditor-arbitrary-box=\"wrap\"><\/section>\n<p data-exeditor-arbitrary-box=\"wrap\">Take the GMR retargeting segment as an example: the training systematically explained robot coordinate systems and rotation transformations, with an in-depth analysis of how Euler angles and quaternions are used to represent orientation. 
Combined with the GMR retargeting process, it demonstrated the complete implementation path, from motion capture data format conversion through skeleton proportion adjustment to inverse kinematics solving, and offered guidance on the optimized implementation of humanoid data retargeting from a principle-based perspective.<\/p>\n<section data-exeditor-arbitrary-box=\"wrap\"><\/section>\n<section data-exeditor-arbitrary-box=\"wrap\"><\/section>\n<section data-exeditor-arbitrary-box=\"wrap\"><\/section>\n<\/section>\n<\/section>\n<\/section>\n<\/section>\n<\/section>\n<\/section>\n<\/section>\n<section data-exeditor-arbitrary-box=\"wrap\">\n<section data-exeditor-arbitrary-box=\"wrap\" data-exeditor-arbitrary-box-special-style=\"width\">\n<section data-exeditor-arbitrary-box=\"wrap\">\n<section data-exeditor-arbitrary-box=\"wrap\">\n<section data-exeditor-arbitrary-box=\"wrap\">\n<section data-exeditor-arbitrary-box=\"wrap\">\n<section data-exeditor-arbitrary-box=\"wrap\">\n<section data-exeditor-arbitrary-box=\"wrap\"><strong>Highlight 3: High Torque Open-Source Datasets<\/strong><\/section>\n<section data-exeditor-arbitrary-box=\"wrap\"><\/section>\n<section data-exeditor-arbitrary-box=\"wrap\"><\/section>\n<section data-exeditor-arbitrary-box=\"wrap\"><\/section>\n<\/section>\n<\/section>\n<\/section>\n<\/section>\n<\/section>\n<\/section>\n<\/section>\n<section data-exeditor-arbitrary-box=\"wrap\">\n<p class=\"auto-hide-last-sibling-br paragraph-pP9ZLC paragraph-element br-paragraph-space\">During the training, we used the Lafan dataset for motion retargeting, aiming to solve common issues faced by developers, such as mesh penetration and joint misalignment between human and robot motions, while providing high-quality datasets that are ready for direct use and exhibit minimal coordinate-axis drift. 
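As a concrete companion to the rotation-representation material in the retargeting segment above, the sketch below shows a minimal Euler-angle/quaternion round trip. The ZYX (yaw-pitch-roll) convention and the function names are illustrative assumptions for this post, not the actual GMR pipeline code:

```python
import math

def euler_zyx_to_quat(yaw, pitch, roll):
    """Intrinsic Z-Y-X (yaw-pitch-roll) Euler angles -> quaternion (w, x, y, z)."""
    cy, sy = math.cos(yaw / 2), math.sin(yaw / 2)
    cp, sp = math.cos(pitch / 2), math.sin(pitch / 2)
    cr, sr = math.cos(roll / 2), math.sin(roll / 2)
    return (
        cr * cp * cy + sr * sp * sy,
        sr * cp * cy - cr * sp * sy,
        cr * sp * cy + sr * cp * sy,
        cr * cp * sy - sr * sp * cy,
    )

def quat_to_euler_zyx(w, x, y, z):
    """Inverse conversion; the asin argument is clamped to stay finite near gimbal lock."""
    roll = math.atan2(2 * (w * x + y * z), 1 - 2 * (x * x + y * y))
    pitch = math.asin(max(-1.0, min(1.0, 2 * (w * y - z * x))))
    yaw = math.atan2(2 * (w * z + x * y), 1 - 2 * (y * y + z * z))
    return yaw, pitch, roll

# Round-trip check away from the gimbal-lock poles.
angles = (0.5, 0.2, -0.3)  # yaw, pitch, roll in radians
q = euler_zyx_to_quat(*angles)
print(quat_to_euler_zyx(*q))  # ~(0.5, 0.2, -0.3)
```

Quaternions sidestep the gimbal-lock ambiguity that the clamped asin hints at, which is one reason retargeting pipelines typically store orientations as quaternions.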
In the future, we will also publish the AMASS dataset and other motion capture datasets to the official GitHub repository for developers' reference.<\/p>\n<p>In terms of motion data processing, the training explained how to adjust the skeleton parameters (proportions, offsets, and quaternions) for the Pi+ robot using the inverse kinematics (IK) retargeting algorithm files, and detailed the effect of modifying each parameter, so as to achieve high-precision human-robot motion retargeting.<\/p>\n<div><\/div>\n<\/section>\n<section data-exeditor-arbitrary-box=\"wrap\"><\/section>\n<section data-exeditor-arbitrary-box=\"wrap\"><\/section>\n<section data-exeditor-arbitrary-box=\"wrap\"><strong>\u25ba Training Playback Link<\/strong><\/section>\n<section data-exeditor-arbitrary-box=\"wrap\"><\/section>\n<section data-exeditor-arbitrary-box=\"wrap\"><\/section>\n<p data-exeditor-arbitrary-box=\"wrap\">\u3010Mimic Baseline Online Training Playback: Dataset, Retargeting, Training and Sim2Real - Bilibili\u3011<br \/>\nhttps:\/\/b23.tv\/ErLpw8I<\/p>\n<section data-exeditor-arbitrary-box=\"wrap\"><\/section>\n<section data-exeditor-arbitrary-box=\"wrap\"><\/section>\n<section data-exeditor-arbitrary-box=\"wrap\"><\/section>\n<section data-exeditor-arbitrary-box=\"wrap\"><strong>\u25ba On-Site Q&amp;A and Key Issues<\/strong><\/section>\n<section data-exeditor-arbitrary-box=\"wrap\"><\/section>\n<section data-exeditor-arbitrary-box=\"wrap\"><\/section>\n<section data-exeditor-arbitrary-box=\"wrap\"><\/section>\n<p data-exeditor-arbitrary-box=\"wrap\">In robot simulation training, PD parameter matching, command definition, and motion capture dataset processing are frequent pain points for developers. 
Below is a summary and explanation of several typical issues discussed during the training:<\/p>\n<section data-exeditor-arbitrary-box=\"wrap\"><\/section>\n<section data-exeditor-arbitrary-box=\"wrap\">\n<section data-exeditor-arbitrary-box=\"wrap\" data-exeditor-arbitrary-box-special-style=\"width\">\n<section data-exeditor-arbitrary-box=\"wrap\">\n<section data-exeditor-arbitrary-box=\"wrap\">\n<section data-exeditor-arbitrary-box=\"wrap\">\n<section data-exeditor-arbitrary-box=\"wrap\">\n<section data-exeditor-arbitrary-box=\"wrap\">\n<section data-exeditor-arbitrary-box=\"wrap\">\n<section><\/section>\n<section><strong>Q1\uff1aIf PD parameters cannot be found during training, how should they be obtained?<\/strong><\/section>\n<section><\/section>\n<section><\/section>\n<\/section>\n<\/section>\n<\/section>\n<\/section>\n<\/section>\n<\/section>\n<\/section>\n<\/section>\n<section data-exeditor-arbitrary-box=\"wrap\"><\/section>\n<p data-exeditor-arbitrary-box=\"wrap\">The key lies in parameter identification:<\/p>\n<p>1. Send periodic position signals (e.g., sinusoidal signals) to each joint motor so that it tracks the target position.<\/p>\n<p>2. Focus on high-load joints (such as the ankle joints), as these bear the full-body torque and impact and are crucial for verifying tracking performance.<\/p>\n<p>3. Backfill the measured PD values into the simulation model, which can significantly narrow the performance gap between the simulated and real systems. 
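The three steps above can be sketched as follows. This is a self-contained toy illustration, not High Torque's actual identification tooling: a simulated single joint with known gains stands in for the real motor interface, and the gains are then recovered with a least-squares fit on the tracking data:

```python
import numpy as np

# Hypothetical ground-truth gains to recover (unknown on real hardware).
KP_TRUE, KD_TRUE = 40.0, 1.5

def simulate_joint(t, q_des, qd_des):
    """Toy 1-DoF joint under PD control: integrate q'' = (tau - damping*qd) / inertia."""
    inertia, damping = 0.05, 0.2
    dt = t[1] - t[0]
    q, qd, tau = np.zeros_like(t), np.zeros_like(t), np.zeros_like(t)
    for k in range(len(t) - 1):
        tau[k] = KP_TRUE * (q_des[k] - q[k]) + KD_TRUE * (qd_des[k] - qd[k])
        qdd = (tau[k] - damping * qd[k]) / inertia
        qd[k + 1] = qd[k] + qdd * dt          # semi-implicit Euler step
        q[k + 1] = q[k] + qd[k + 1] * dt
    tau[-1] = KP_TRUE * (q_des[-1] - q[-1]) + KD_TRUE * (qd_des[-1] - qd[-1])
    return q, qd, tau

# Step 1: periodic (sinusoidal) position command for the joint to track.
t = np.linspace(0.0, 4.0, 4000)
q_des = 0.3 * np.sin(2 * np.pi * t)
qd_des = 0.3 * 2 * np.pi * np.cos(2 * np.pi * t)
q, qd, tau = simulate_joint(t, q_des, qd_des)

# Step 3: least-squares fit of tau ~ kp*(q_des - q) + kd*(qd_des - qd).
A = np.column_stack([q_des - q, qd_des - qd])
(kp_est, kd_est), *_ = np.linalg.lstsq(A, tau, rcond=None)
print(f"kp ~ {kp_est:.2f}, kd ~ {kd_est:.2f}")  # should recover ~40.0 and ~1.5
```

On real hardware, q, qd, and tau would come from the motor's encoder and torque telemetry rather than the simulator, and step 2 simply means running this fit on the high-load joints first.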
While the robustness of reinforcement learning can absorb parameter deviations to some extent, accurate PD values can roughly double training efficiency and markedly improve convergence.<\/p>\n<section data-exeditor-arbitrary-box=\"wrap\"><\/section>\n<section data-exeditor-arbitrary-box=\"wrap\">\n<section data-exeditor-arbitrary-box=\"wrap\" data-exeditor-arbitrary-box-special-style=\"width\">\n<section data-exeditor-arbitrary-box=\"wrap\">\n<section data-exeditor-arbitrary-box=\"wrap\">\n<section data-exeditor-arbitrary-box=\"wrap\">\n<section data-exeditor-arbitrary-box=\"wrap\">\n<section data-exeditor-arbitrary-box=\"wrap\">\n<section data-exeditor-arbitrary-box=\"wrap\"><\/section>\n<section data-exeditor-arbitrary-box=\"wrap\"><strong>Q2\uff1aWhat does \"command=44\" in the source code mean? And how does this command differ from the commands used in traditional Reinforcement Learning (RL)?<\/strong><\/section>\n<section data-exeditor-arbitrary-box=\"wrap\"><\/section>\n<section data-exeditor-arbitrary-box=\"wrap\"><\/section>\n<\/section>\n<\/section>\n<\/section>\n<\/section>\n<\/section>\n<\/section>\n<\/section>\n<section data-exeditor-arbitrary-box=\"wrap\"><\/section>\n<p data-exeditor-arbitrary-box=\"wrap\">Here, the \"command\" is not a velocity-tracking or adversarial-style command as used in traditional Reinforcement Learning (RL) or AMP frameworks, but an imitation learning input built from reference motion data:<\/p>\n<p>1. The robot has 22 degrees of freedom (DoF), and each DoF contributes 2 dimensions: joint angle and joint velocity.<\/p>\n<p>2. Therefore: 22 \u00d7 2 = 44-dimensional reference motion features.<\/p>\n<p>The core of this design is to enable the robot to learn to imitate posture and dynamics, rather than simply chase velocity targets.<\/p>\n<section data-exeditor-arbitrary-box=\"wrap\"><\/section>\n<section data-exeditor-arbitrary-box=\"wrap\"><\/section>\n<section data-exeditor-arbitrary-box=\"wrap\">\n<section 
data-exeditor-arbitrary-box=\"wrap\" data-exeditor-arbitrary-box-special-style=\"width\">\n<section data-exeditor-arbitrary-box=\"wrap\">\n<section data-exeditor-arbitrary-box=\"wrap\">\n<section data-exeditor-arbitrary-box=\"wrap\">\n<section data-exeditor-arbitrary-box=\"wrap\">\n<section data-exeditor-arbitrary-box=\"wrap\">\n<section data-exeditor-arbitrary-box=\"wrap\"><\/section>\n<section data-exeditor-arbitrary-box=\"wrap\"><strong>Q3\uff1aWhat should be done if the coordinate axes of a dataset collected with an IMU-based motion capture device are not standardized?<\/strong><\/section>\n<section data-exeditor-arbitrary-box=\"wrap\"><\/section>\n<section data-exeditor-arbitrary-box=\"wrap\"><\/section>\n<\/section>\n<\/section>\n<\/section>\n<\/section>\n<\/section>\n<\/section>\n<\/section>\n<section data-exeditor-arbitrary-box=\"wrap\"><\/section>\n<p data-exeditor-arbitrary-box=\"wrap\">Depending on the accuracy requirements, there are two ways to handle this:<\/p>\n<section><\/section>\n<section>1. For high-accuracy requirements (scientific research\/engineering verification)<\/section>\n<section>1\uff09Check the initial posture of the dataset and define a \"standard posture\".<\/section>\n<section>2\uff09Refer to the joint coordinate specifications of standard datasets such as Lafan.<\/section>\n<section>3\uff09Correct the posture of the motion capture data, or adjust the robot's joint directions, so that they align with the standard coordinate system.<\/section>\n<p>2. For general requirements (demonstration\/preliminary verification)<\/p>\n<p data-exeditor-arbitrary-box=\"wrap\">It is enough that the robot's movements look roughly consistent with the dataset's posture to the naked eye; no complex correction is needed. 
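For the high-accuracy route, the posture-correction step can be sketched as applying one fixed corrective rotation to every orientation in the dataset. The quaternion helper below and the specific -90° X-axis correction are illustrative assumptions; the actual angle and axis depend on the capture device's frame convention:

```python
import numpy as np

def quat_mul(a, b):
    """Hamilton product of quaternions in (w, x, y, z) order."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([
        w1 * w2 - x1 * x2 - y1 * y2 - z1 * z2,
        w1 * x2 + x1 * w2 + y1 * z2 - z1 * y2,
        w1 * y2 - x1 * z2 + y1 * w2 + z1 * x2,
        w1 * z2 + x1 * y2 - y1 * x2 + z1 * w2,
    ])

# Assumed correction: rotate -90 deg about X to map a Y-up capture frame to a
# Z-up robot frame (the real angle/axis must be read off the device's convention).
half = np.deg2rad(-90.0) / 2.0
q_fix = np.array([np.cos(half), np.sin(half), 0.0, 0.0])

def correct(q_capture):
    """Re-express a capture-frame orientation in the standard (robot) world frame."""
    q = quat_mul(q_fix, q_capture)
    return q / np.linalg.norm(q)  # renormalize against numerical drift

# The identity orientation in the capture frame maps to the correction itself.
q_id = np.array([1.0, 0.0, 0.0, 0.0])
print(correct(q_id))
```

Once q_fix is chosen, applying it uniformly to the whole dataset realigns the global frame while leaving the relative motion between frames intact.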
A unified coordinate specification can significantly improve data reusability and the stability of model transfer.<\/p>\n<section data-exeditor-arbitrary-box=\"wrap\"><\/section>\n<section data-exeditor-arbitrary-box=\"wrap\"><\/section>\n<section data-exeditor-arbitrary-box=\"wrap\"><\/section>\n<section data-exeditor-arbitrary-box=\"wrap\"><\/section>\n<section data-exeditor-arbitrary-box=\"wrap\"><strong>\u25ba Next Steps<\/strong><\/section>\n<section data-exeditor-arbitrary-box=\"wrap\"><\/section>\n<section data-exeditor-arbitrary-box=\"wrap\"><\/section>\n<section data-exeditor-arbitrary-box=\"wrap\"><\/section>\n<section data-exeditor-arbitrary-box=\"wrap\">\n<p class=\"auto-hide-last-sibling-br paragraph-pP9ZLC paragraph-element br-paragraph-space\">High Torque has opened its Sim2Real experimental spaces in Beijing and Guangzhou, inviting more developers to experience the full simulation-to-real-robot workflow hands-on. Add us on WeChat to book an offline exchange and hands-on session! (WeChat ID: dionysuslearning)<\/p>\n<p>At the same time, we have also opened internship positions; recruitment is ongoing for roles in mechanical engineering, motion control, product development, operations, and other areas! 
We sincerely welcome students who are passionate about robot R&amp;D and innovation to join us and jointly explore the future of humanoid robots.<\/p>\n<\/section>\n<p>&nbsp;<\/p>\n<section data-exeditor-arbitrary-box=\"wrap\">\n<p style=\"text-align: center;\">We are committed to enabling every developer<\/p>\n<p style=\"text-align: center;\">to have their own humanoid robot.<\/p>\n<\/section>\n","protected":false},"excerpt":{"rendered":"<p>Demonstration of Mimic Baseline Humanoid Motion Control [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":662,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":[],"categories":[16],"tags":[],"_links":{"self":[{"href":"https:\/\/hightorque.cn\/en\/wp-json\/wp\/v2\/posts\/661"}],"collection":[{"href":"https:\/\/hightorque.cn\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/hightorque.cn\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/hightorque.cn\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/hightorque.cn\/en\/wp-json\/wp\/v2\/comments?post=661"}],"version-history":[{"count":0,"href":"https:\/\/hightorque.cn\/en\/wp-json\/wp\/v2\/posts\/661\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/hightorque.cn\/en\/wp-json\/wp\/v2\/media\/662"}],"wp:attachment":[{"href":"https:\/\/hightorque.cn\/en\/wp-json\/wp\/v2\/media?parent=661"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/hightorque.cn\/en\/wp-json\/wp\/v2\/categories?post=661"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/hightorque.cn\/en\/wp-json\/wp\/v2\/tags?post=661"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}