Analyzer Workflow

Use Action's Analyzer to compute the path of the live-action camera and of object motion in 3D space. Using the calculated position and motion of the virtual camera, you can match image sequences seamlessly, placing any element in the scene. As the camera moves, the perspective of the placed element changes with the perspective of the background. The virtual camera's motion is intended to be identical to the motion of the actual camera that shot the scene.
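Why a solved virtual camera keeps inserted elements locked to the background can be seen with a plain pinhole projection: the same 3D point lands at a different image position whenever the camera moves, so any added element must be re-projected through the solved camera each frame. The sketch below is illustrative only, with hypothetical values and a simplified pinhole model; it is not the application's API.

```python
import numpy as np

def project(point_3d, cam_pos, focal):
    # Simple pinhole projection: translate the point into camera space,
    # then divide x and y by depth and scale by the focal length.
    p = np.asarray(point_3d, dtype=float) - np.asarray(cam_pos, dtype=float)
    return focal * p[:2] / p[2]

# The same scene point projects to different image positions as the
# camera moves, which is why an inserted element must follow the solved
# camera motion to stay locked to the background.
point = (1.0, 0.5, 10.0)
print(project(point, cam_pos=(0, 0, 0), focal=50))  # frame 1
print(project(point, cam_pos=(2, 0, 0), focal=50))  # frame 2: camera moved right
```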

Use the following workflow table as a quick start guide to the Analyzer. Follow the links for more detailed information.

 

Step 1

Select media (mono or stereo) to analyze and add an Analyzer node.

Adding an Analyzer Node

 

Step 2

Perform Camera Tracking in the Analyzer menu.

Optional steps:

  • Perform a Lens Correction.
  • Add mask constraints to exclude moving areas, or areas you do not want, from the analysis.
  • Enter the properties of the camera that shot the footage being analyzed.

Camera Tracking

 

Step 3

Fine-tune and recalibrate or refine the camera tracking analysis. This step is optional, depending on the results of your initial analysis.

Fine Tuning the Analysis

 

Step 4

Create a Point Cloud of selected points after the analysis.
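A point cloud is, in essence, the set of 3D positions recovered for tracked 2D features across frames. A rough sense of how one such position is recovered: each tracked feature defines a viewing ray from every solved camera position, and the 3D point is where those rays (nearly) meet. The sketch below triangulates one point from two rays by a least-squares closest-point solve; the ray origins and directions are hypothetical values, and this is a crude stand-in for the multi-frame solve a camera tracker performs, not the application's internals.

```python
import numpy as np

def triangulate(origin_a, dir_a, origin_b, dir_b):
    # Least-squares closest point between two 3D rays: solve for the ray
    # parameters t, s minimising |oa + t*a - (ob + s*b)|, then return the
    # midpoint of the two nearest points.
    a, b = np.asarray(dir_a, float), np.asarray(dir_b, float)
    oa, ob = np.asarray(origin_a, float), np.asarray(origin_b, float)
    m = np.array([[a @ a, -(a @ b)],
                  [a @ b, -(b @ b)]])
    rhs = np.array([(ob - oa) @ a, (ob - oa) @ b])
    t, s = np.linalg.solve(m, rhs)
    return ((oa + t * a) + (ob + s * b)) / 2.0

# Two rays seen from different camera positions converge on one scene point.
print(triangulate((0, 0, 0), (0, 0, 1), (2, 0, 0), (-2, 0, 5)))
```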

Converting the Camera Analyzer Results

 

Step 5

Perform Object Tracking. If needed after camera tracking, you can track moving objects in the scene, such as areas inside the masks that were excluded from the camera tracking analysis.

Object Tracking