The default renderer in MotionBuilder (used before 2014) requires manual shading setup for each model, as well as material and lighting tweaking, to achieve good visual quality. Its performance degrades in geometry- or shading-intensive scenes. Performance is further reduced, and rendering-related information hardly flows through the pipeline, when frequent interoperability with other applications is in demand.
A general rendering solution that meets the needs of most users is not feasible, because many studios have their own unique rendering characteristics and data pipelines, and these requirements cannot all be addressed by a single general solution. Therefore, a plausible approach is to develop MotionBuilder into a high performance, standard animation platform with superior real-time hardware device support, and to open SDK access to the rendering and data pipeline aspects as much as possible. This approach enables third-party developers to build their own proprietary rendering solutions, which better suit their unique production requirements.
The contents of the MotionBuilder viewport include three parts: the scene, locators, and the heads-up display (HUD). While the custom renderer API focuses on scene shading, the plug-in developer can extend the locators and HUD with other parts of the SDK. The texture, material, geometry, model, light, and camera can also be customized accordingly, if needed.
Real-time capabilities are of paramount importance, and this API is designed to maintain MotionBuilder's excellent real-time performance while allowing the plug-in developer to produce a unique, high quality rendered image, with sufficient flexibility to support various production pipelines and workflows.
The multiple concurrent pipelines and the double data buffer mechanism for high performance evaluation and rendering are illustrated in the following figure.
Since the beginning, MotionBuilder has used state-of-the-art concurrent pipelines to exploit the increasing processing power available in multi-CPU machines. Adopting the multiple data buffer mechanism also reduces the need for explicit locks and further improves the pipelines' throughput, as illustrated in the figure. In this architecture, the evaluation pipeline can evaluate the scene for the next frame in the evaluation data buffer, while the rendering pipeline displays the scene for the current frame using the data stored in the display buffer. To swap data, the indexes of the two buffers are simply exchanged. After a short period of scene change event processing (for example, mouse, keyboard, or other user or system events), the next frame starts with each pipeline working on the alternate data buffer.
The tasks in the evaluation pipeline are further distributed to multiple threads after directed acyclic graph (DAG) dependency sorting. Heavy geometry deformation tasks can be executed either in parallel on multiple CPUs in the evaluation pipeline, after the transformation evaluation tasks they depend on, or collectively on the graphics processing unit (GPU) at the beginning of the rendering pipeline for better performance, if a powerful CUDA™-enabled video card is available.
Currently, MotionBuilder utilizes only a single CPU core and a single GPU device for rendering by default. However, a plug-in developer can adopt a multi-GPU rendering approach without any known barriers. Considering the recent commoditization of multi-GPU hardware solutions from major video card vendors, multi-GPU rendering might be an attractive option for those pursuing ultimate interactive performance and high visual fidelity.
The concurrent pipeline and multiple buffer architecture helps improve runtime performance, but it can sometimes make plug-in development less straightforward. Plug-in developers need to make design decisions carefully to maintain real-time capability without compromising data integrity across the multiple pipeline buffers. A set of global callbacks is provided to give plug-in developers access to the critical timing stages of the various pipelines.
The following global callback timings are available on the concurrent evaluation and rendering pipelines:
enum FBGlobalEvalCallbackTiming
{
    /** Invoked before any DAG (Transformation and Deformation) evaluation tasks start
     *  in the evaluation pipeline/thread. */
    kFBGlobalEvalCallbackBeforeDAG,

    /** Invoked after all DAG (Transformation and Deformation) evaluation tasks are finished
     *  in the evaluation pipeline/thread. */
    kFBGlobalEvalCallbackAfterDAG,

    /** Invoked after all the deformation tasks are finished in the evaluation pipeline/thread. */
    kFBGlobalEvalCallbackAfterDeform,

    /** Invoked when both the evaluation and rendering pipelines/threads are stopped.
     *  Useful in certain complicated scene change tasks to avoid race conditions. */
    kFBGlobalEvalCallbackSyn,

    /** Invoked in the rendering pipeline, before any rendering tasks start (immediately
     *  after clearing the GL back buffer). */
    kFBGlobalEvalCallbackBeforeRender,

    /** Invoked in the rendering pipeline, after all rendering tasks finish (just
     *  before swapping the GL back/front buffers). */
    kFBGlobalEvalCallbackAfterRender
};
The following functions and event properties manage the registration of global callbacks at a pipeline's critical timings:
void RegisterEvaluationGlobalFunction  (kFBEvaluationGlobalFunctionCallback pCallback, FBGlobalEvalCallbackTiming pTiming);
void UnregisterEvaluationGlobalFunction(kFBEvaluationGlobalFunctionCallback pCallback, FBGlobalEvalCallbackTiming pTiming);

FBPropertyEventCallbackEvalPipeline   OnEvaluationPipelineEvent;
FBPropertyEventCallbackRenderPipeline OnRenderingPipelineEvent;
FBPropertyEventCallbackSynPoint       OnSynchronizationEvent;
It is important for plug-in developers to understand the concurrent pipeline and multiple buffer mechanism, as well as the global critical timing callbacks, not only for custom renderer development but for any other plug-in development. This understanding enables plug-in developers to get the most out of MotionBuilder's high performance runtime platform and flexible API design.
Most of the MotionBuilder SDK object classes inherit from the FBPlug class, which provides a set of low level functions to manage the source (child) and destination (parent) connections between plug instances. It also provides three important virtual functions: FBPlug::PlugNotify(), FBPlug::PlugDataNotify(), and FBPlug::PlugStateNotify(). These functions can be overridden in subclasses to receive callbacks when the internal state of custom type instances changes.
The following figure shows the plug based object model in MotionBuilder:
The most important subclasses are FBComponent and FBProperty. An instance of FBComponent owns a list of properties (FBComponent::PropertyList). Connections between two object instances, and the various value (or state) changing events on owned properties, trigger callback events in the PlugNotify() (or PlugDataNotify() and PlugStateNotify()) functions. The FBProperty subclasses can hold various types of raw data (FBPropertyBool, FBPropertyInt, FBPropertyColor, and others), and they also provide a uniform mechanism to organize an object's attributes with either scalar or time-varying (FBAnimatableProperty) values.
Most types of objects in the ORSDK (including models, locators, geometry, materials, textures, and others) inherit from the FBComponent class. The following figure shows the model, locator, geometry, material, and texture class hierarchy.
The ORSDK also provides a set of callback events (of type FBPropertyEvent and its subclasses) on some global singleton objects, which allow plug-in developers to write code that listens to the following:
You can use a wildcard search (FBPropertyEvent*) in the ORSDK header files to collect the complete list of event properties. The following figure shows the frequently used event properties on singleton objects.