MOVIN Studio setup takes two simple steps: connect MOVIN TRACIN to your PC over Ethernet, then define the mocap zone. A point-cloud visualization helps you level the ground accurately: points close to the reference line appear green, while points above it or too far away appear red, indicating misalignment.
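The green/red coloring described above amounts to classifying each point by its distance from the reference ground plane. A minimal sketch of that idea, assuming a simple height threshold and (x, y, z) points in meters (the function name, tolerance, and point format are illustrative assumptions, not MOVIN Studio's actual values):

```python
def classify_ground_points(points, ground_y=0.0, tolerance=0.02):
    """Label each (x, y, z) point green if its height is within
    `tolerance` meters of the reference ground plane, else red."""
    colors = []
    for x, y, z in points:
        colors.append("green" if abs(y - ground_y) <= tolerance else "red")
    return colors

cloud = [(0.1, 0.005, 0.2), (0.3, 0.15, 0.1), (-0.2, -0.01, 0.4)]
print(classify_ground_points(cloud))  # ['green', 'red', 'green']
```

Adjusting the sensor until every point falls within the tolerance band corresponds to the all-green state the interface guides you toward.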
MOVIN Studio completes calibration in just 3 seconds using AI. Stand at the center of the mocap zone and hold an A-pose without moving. If the performer changes, click Calibrate Actor to create a new profile. Profiles can be renamed or deleted, and selecting the correct performer profile helps ensure high-quality mocap data.
A set of built-in tools helps you monitor and refine motion capture. Use features such as Foot Contact, Skeleton View, Mirror Mode, and In-place Mode in real time. You can also navigate the scene with the mouse to rotate, pan, and zoom from any angle.
Click Review Clips to view recorded motion clips. You can adjust playback speed, trim clips, and export them in .bvh or .fbx formats for use in Blender, Unity, Unreal Engine, and MotionBuilder.
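Trimming happens in the Review Clips UI, but it may help to see what a trim means at the file level. In the standard .bvh format, frame data follows a `MOTION` header with `Frames:` and `Frame Time:` lines; a trim rewrites the frame count and keeps only the selected frame rows. A rough stdlib sketch under those assumptions (this is not MOVIN Studio code):

```python
def trim_bvh(bvh_text, start, end):
    """Keep only frames [start, end) of a BVH clip's MOTION section.
    The skeleton HIERARCHY is left untouched; only the frame count
    and the frame data lines are rewritten."""
    lines = bvh_text.splitlines()
    # Locate the "Frames: N" line inside the MOTION section.
    i = next(n for n, l in enumerate(lines) if l.strip().startswith("Frames:"))
    frames = lines[i + 2:]            # frame data follows the "Frame Time:" line
    kept = frames[start:end]
    lines[i] = f"Frames: {len(kept)}"
    return "\n".join(lines[:i + 2] + kept)
```

The same frame range applies regardless of playback speed, since speed only changes how the clip is previewed, not the stored frames.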
Use Live Streaming to send motion data from MOVIN Studio to other platforms in real time. It supports tools such as Warudo, VRChat, Unity, Unreal Engine, Blender, Notch, TouchDesigner, and ROS, and can stream to any platform that supports the OSC protocol.
Retargeting allows motion captured with MOVINMan to be applied naturally to custom characters. The process consists of four steps: Pose Matching aligns the rest pose; Bone Mapping matches the character's bones to MOVINMan's; Retarget Settings adjust how the retargeting algorithm transfers motion to the character; and Bone Offset adjusts individual bones when the character's proportions differ from MOVINMan.
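Conceptually, Bone Mapping and Bone Offset combine into a per-bone lookup-and-correct pass over each frame. A toy sketch of that idea, where all bone names and offset values are hypothetical (MOVIN Studio's actual retargeting operates on full rotations, not single angles):

```python
# Bone Mapping: source skeleton name -> character rig name (hypothetical names)
BONE_MAP = {
    "Hips": "pelvis",
    "LeftUpLeg": "thigh_L",
    "RightUpLeg": "thigh_R",
}
# Bone Offset: corrective rotation in degrees where proportions differ
BONE_OFFSET = {"thigh_L": 5.0}

def retarget_frame(source_rot_deg):
    """Map each source bone's rotation onto the character rig and
    apply a per-bone corrective offset; unmapped bones are skipped."""
    out = {}
    for src_bone, angle in source_rot_deg.items():
        tgt = BONE_MAP.get(src_bone)
        if tgt is None:
            continue
        out[tgt] = angle + BONE_OFFSET.get(tgt, 0.0)
    return out

print(retarget_frame({"Hips": 10.0, "LeftUpLeg": -20.0}))
# {'pelvis': 10.0, 'thigh_L': -15.0}
```

Pose Matching and Retarget Settings happen before this per-frame pass: they establish the shared rest pose and tune how the algorithm transfers motion in the first place.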
To support a wide range of use cases, MOVIN Studio also provides additional features. Puppet Mode allows a single user to prepare all settings in advance and operate motion capture independently. Third-party glove support enables accurate hand capture that integrates seamlessly with full-body motion. Facial motion capture can also be recorded separately using a smartphone and combined with full-body motion data.