About the SDK Components


You can refer to the ORSDK example in <yourinstallationfolder>/OpenRealitySDK/samples/shaders/CustomRenderer to understand how the custom renderer is implemented. For information about building and deploying this sample plug-in, see the Quick Start section in the Custom Render API topic.

FBRendererCallback Interface

The plug-in developer must subclass the FBRendererCallback interface and override the following virtual functions.

class FBSDK_DLL FBRendererCallback : public FBComponent 
{
    …
    virtual const char*  GetCallbackName() const = 0;
    virtual const char*  GetCallbackDesc() const = 0;
    virtual unsigned int GetCallbackPrefCount() const;
    virtual const char*  GetCallbackPrefName(unsigned int pIndex) const;

    virtual void Render(FBRenderOptions* pRenderOptions);

    virtual void Attach();
    virtual void Detach();
    virtual void DetachDisplayContext(FBViewingOptions* pViewOption);

    … 
};

The first four virtual functions (FBRendererCallback::GetCallbackName(), FBRendererCallback::GetCallbackDesc(), FBRendererCallback::GetCallbackPrefCount(), and FBRendererCallback::GetCallbackPrefName()) must be overridden to provide the custom renderer’s name and description, and the count and names of its callback preferences, respectively.
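A minimal sketch of such a subclass might look like the following. The class name MyRendererCallback and the returned strings are hypothetical, and the declaration and registration macros used by the CustomRenderer sample are omitted here; see the sample for the complete boilerplate.

class MyRendererCallback : public FBRendererCallback
{
public:
    // Name and description shown for this renderer in the UI.
    virtual const char* GetCallbackName() const { return "My Custom Renderer"; }
    virtual const char* GetCallbackDesc() const { return "Sketch of a custom renderer callback"; }

    // One callback preference ("pref") entry exposed by this renderer.
    virtual unsigned int GetCallbackPrefCount() const { return 1; }
    virtual const char*  GetCallbackPrefName(unsigned int pIndex) const { return "Default"; }

    virtual void Render(FBRenderOptions* pRenderOptions);

    virtual void Attach();
    virtual void Detach();
    virtual void DetachDisplayContext(FBViewingOptions* pViewOption);
};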

FBRendererCallback::Render()

The virtual Render() function is invoked once per frame, per view pane, for the currently chosen custom renderer. When it is called, the associated OpenGL context is already set up properly: the output frame buffer (color, depth, stencil, and others), the GL viewport, the camera’s projection and model-view matrices, and so on. Depending on the current camera’s settings, the target frame buffer might already contain the rendering result for the camera’s back plate and grid. It is the plug-in developer’s responsibility to fill the output frame buffer with the desired content (either by rendering directly into the target frame buffer, or by blitting the final result into it). In most cases the color buffer output is sufficient, but in certain scenarios it is desirable to produce a proper depth buffer output as well. After the Render() function returns, depending on the current display mode and camera settings, MotionBuilder might continue to render the other elements in the viewport, as shown in the following figures.

The following figure shows the Models-Only display mode.

The following figure shows how the X-ray (and Normal) display modes draw extra elements in the viewport.

In the stereo display mode, the Render() function is called twice per frame, per view pane (once for the left eye and once for the right eye), and MotionBuilder then composites the two buffers accordingly (unless the active stereo mode is used).
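As a rough sketch (continuing the hypothetical MyRendererCallback class above), the Render() override simply draws into the frame buffer that MotionBuilder has already configured for the current view pane:

void MyRendererCallback::Render(FBRenderOptions* pRenderOptions)
{
    // The viewport, projection, and model-view matrices are already set up
    // for the current camera, so the callback can issue GL calls directly.
    glPushAttrib(GL_ALL_ATTRIB_BITS);

    // ... iterate the scene and draw it here (see the iterative forward
    // rendering and cached scene graph approaches described below) ...

    glPopAttrib();
}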

Display Context Management

When the custom renderer is switched on and off for a view pane, the virtual functions FBRendererCallback::Attach() and FBRendererCallback::Detach() are called respectively. FBRendererCallback::DetachDisplayContext() is called when the custom renderer is switched off and no view pane refers to it any longer; this is a good time to release the allocated CPU and GPU resources to save memory.
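For example, a callback that owns GPU resources might release them at this point. In this sketch, mVertexBufferId is a hypothetical member of MyRendererCallback holding an OpenGL buffer object:

void MyRendererCallback::DetachDisplayContext(FBViewingOptions* pViewOption)
{
    // Release GPU resources now that no view pane uses this renderer.
    if (mVertexBufferId != 0)
    {
        glDeleteBuffers(1, &mVertexBufferId);
        mVertexBufferId = 0;
    }

    // Any CPU-side caches can be cleared here as well.
}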

Plugin Capacity Provision Negotiation

Following are the additional properties declared in the FBRendererCallback class.

//! Open Reality renderer callback interface
class FBSDK_DLL FBRendererCallback : public FBComponent 
{
    ...

    //! Can this Renderer Callback support IDBuffer picking.
    FBPropertyBool SupportIDBufferPicking;                    
    
    //! Set true to use default camera front plate rendering method; set false to disable it.
    FBPropertyBool DefaultCameraFrontPlateRendering;        

    //! Set true to use default camera back plate rendering method; set false to disable it.
    FBPropertyBool DefaultCameraBackPlateRendering;         

    //! Set true to use default light ground projection rendering method; set false to disable it.
    FBPropertyBool DefaultLightGroundProjectionRendering;   

    //! Set true to use default light volume rendering method; set false to disable it.
    FBPropertyBool DefaultLightVolumeRendering;             
};

The plug-in developer must adjust the default values of these additional properties according to their custom renderer’s capacity or choice of implementation.
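For example, a renderer that draws the camera plates itself and does not produce an ID buffer might set its defaults when the callback is created. The following is a sketch, placed here in the hypothetical MyRendererCallback::FBCreate():

bool MyRendererCallback::FBCreate()
{
    // No ID buffer output: MotionBuilder falls back to GL selection-buffer picking.
    SupportIDBufferPicking = false;

    // This renderer draws the camera front and back plates itself,
    // so the default plate rendering methods are disabled.
    DefaultCameraFrontPlateRendering = false;
    DefaultCameraBackPlateRendering  = false;

    // Keep the default light ground projection and light volume rendering.
    DefaultLightGroundProjectionRendering = true;
    DefaultLightVolumeRendering           = true;

    return true;
}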

Using these additional properties, MotionBuilder can dynamically adjust its runtime behavior according to the current choice of renderer. For example, if you set SupportIDBufferPicking to false, picking in Transparency Selection mode does not work as expected; instead, MotionBuilder performs the default GL selection-buffer based picking. This plug-in capacity provision negotiation approach gives the plug-in developer a certain level of freedom to ignore features that are either hard to fit into their implementation or not particularly useful in their workflow, and thereby meet their development deadlines.

The Transparency Selection mode (as shown in the following figure) works only when the currently chosen custom renderer’s SupportIDBufferPicking property is true.

Plug-in developers can draw the 2D plates (light ground projection, light volume rendering, and the camera front and back plates) in the viewer themselves and bypass the default functionality provided by MotionBuilder to meet their unique visual quality requirements. These additional properties are off by default.

Iterative Forward Rendering Approach

The sample demonstrates two approaches. The first is a simple iterative forward rendering approach:

  1. Set up the global GL shading parameters (frame buffer, light, and others).
  2. Iterate through each displayable model. If the model is visible inside the camera’s frustum:
    1. Set up the model’s transform matrix.
    2. Set up the model’s shading parameters (shader, material, texture, and others).
    3. Draw the geometry.

This logic is illustrated in the ORCustomRendererCallback::RenderWithSimpleIterating() function in the sample. While this approach is easy to implement and sufficient for many scenes, it suffers from the inherent small-batch rendering issue and cannot handle large-scale, fill-rate-limited scenes effectively.
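The following is a rough sketch of this loop, not the sample’s actual code. RenderModelRecursive is a hypothetical helper that would be called from the Render() override, starting at FBSystem’s SceneRootModel, after the global GL shading state has been set up; the visibility test and the per-model draw details are elided.

void MyRendererCallback::RenderModelRecursive(FBModel* pModel)
{
    // Frustum/visibility test elided for brevity.

    // Set up the model's global transform.
    FBMatrix lMatrix;
    pModel->GetMatrix(lMatrix);
    glPushMatrix();
    glMultMatrixd(lMatrix);

    // Bind the model's shader, material, and texture, then draw its geometry
    // (for example, via FBModelVertexData); see RenderWithSimpleIterating()
    // in the sample for the complete per-model shading and drawing code.

    glPopMatrix();

    // Recurse into the children to cover every displayable model.
    for (int i = 0; i < pModel->Children.GetCount(); ++i)
        RenderModelRecursive(pModel->Children[i]);
}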

Cached Scene Graph Approach

A large amount of information changes every frame because MotionBuilder gathers, simulates, and displays a dynamic virtual world. However, a large portion of that information remains unchanged across multiple frames. Therefore, many optimization methods can be implemented to accelerate the rendering performance and improve the visual fidelity. For example, you can sort the render items by different criteria (material, texture, vertex buffer array, and others) to reduce frequent OpenGL state changes, construct a spatial acceleration data structure to iterate only the visible models, and apply a deferred shading strategy (see http://en.wikipedia.org/wiki/Deferred_shading).
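As an illustration of the first of these ideas, a plug-in could keep a flat list of cached render items and sort it so that items sharing the same GL state are drawn consecutively. The RenderItem structure below is hypothetical, a minimal sketch rather than the sample’s data structure:

#include <vector>
#include <algorithm>

// Hypothetical cached render item; the keys identify the GL state it needs.
struct RenderItem
{
    int      shaderKey;
    int      materialKey;
    int      textureKey;
    FBModel* model;
};

// Sort so that consecutive items share as much GL state as possible,
// reducing the number of shader/material/texture switches per frame.
void SortRenderItems(std::vector<RenderItem>& pItems)
{
    std::sort(pItems.begin(), pItems.end(),
        [](const RenderItem& a, const RenderItem& b)
        {
            if (a.shaderKey   != b.shaderKey)   return a.shaderKey   < b.shaderKey;
            if (a.materialKey != b.materialKey) return a.materialKey < b.materialKey;
            return a.textureKey < b.textureKey;
        });
}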

To implement these optimization methods, you need to gain a good understanding of MotionBuilder’s ORSDK. You must listen to the internal events (connection changes, data changes, and others) and cache the scene’s rendering information in a data structure of your choice to achieve high rendering performance and improved visual fidelity.

In the sample, the ORCustomRendererCallback::CacheSceneGraph property toggles between the iterative forward rendering and cached scene graph approaches. Plug-in developers can refer to the following functions in the CustomRenderer sample to gain a basic understanding of how to listen to the internal scene changes and cache the rendering information needed to build a cached scene graph.

void EventSceneChange(HISender pSender, HKEvent pEvent);
void EventConnNotify(HISender pSender, HKEvent pEvent);
void EventConnStateNotify(HISender pSender, HKEvent pEvent);
void EventConnDataNotify(HISender pSender, HKEvent pEvent);

If you work with scenes (animate models, change property values, connect or disconnect materials and textures, and so on) and observe the output of these functions in the console, you can quickly gain a basic understanding of how to listen to the internal change events that are relevant to rendering.
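As a sketch of how such handlers might be hooked up (assuming they are members of the hypothetical MyRendererCallback class), the registration typically happens once, for example in FBCreate(), with matching Remove() calls in FBDestroy():

FBSystem lSystem;
FBScene* lScene = lSystem.Scene;

// Scene-level change events (components added, removed, renamed, and so on).
lScene->OnChange.Add(this, (FBCallback)&MyRendererCallback::EventSceneChange);

// Connection events (plug connect/disconnect, state and data changes).
lSystem.OnConnectionNotify.Add     (this, (FBCallback)&MyRendererCallback::EventConnNotify);
lSystem.OnConnectionStateNotify.Add(this, (FBCallback)&MyRendererCallback::EventConnStateNotify);
lSystem.OnConnectionDataNotify.Add (this, (FBCallback)&MyRendererCallback::EventConnDataNotify);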