How can I update the camera orientation faster than the UI loop?

I’m developing a VR application that uses a modified system for tracking the HMD pose (usually a Vive). Most of the time it works well, on par with the native Vive or Oculus support using their respective tracking systems.

It fails when the game logic gets very heavy (in my case it is actually very complex geometry) and the UI loop takes longer than the HMD’s refresh interval to complete. Since I update the HMD pose in the UI loop, the pose is only updated at the UI loop’s frequency, and the image naturally becomes sluggish.

What makes me suspect there is a better way to handle the pose is this: with the Oculus and the Vive, even when Update, Camera.OnPreCull and Application.OnPreRender are called at lower rates (I measured this; it can drop below 30 fps), the HMD pose or orientation is still updated at the full 90 fps, which makes for a much more pleasant experience.
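(For reference, the callback rates can be measured with something like the sketch below; the class and its member names are mine, not part of any API.)

    using UnityEngine;

    // Hypothetical probe: counts how often Update and OnPreRender fire on the HMD
    // camera, so the rate can be compared against the headset's 90 Hz refresh.
    // OnPreRender is only called on scripts attached to the camera's GameObject.
    public class CallbackRateProbe : MonoBehaviour
    {
        int updateCount, preRenderCount;
        float windowStart;

        void Update()      { updateCount++; }
        void OnPreRender() { preRenderCount++; }

        void LateUpdate()
        {
            if (Time.realtimeSinceStartup - windowStart >= 1f)
            {
                Debug.Log("Update: " + updateCount + "/s, OnPreRender: " + preRenderCount + "/s");
                updateCount = preRenderCount = 0;
                windowStart = Time.realtimeSinceStartup;
            }
        }
    }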

My question therefore is this: is there any way to modify the pose right before the frame is passed to the GPU for rendering, outside of the UI loop? The Oculus and the Vive are doing something like this, but it seems to happen in the innards of Unity.

Besides the basic UI-loop callbacks Update, OnPreCull and OnPreRender, I have tried adding a command buffer, as in

   // Record the view-matrix override, then attach the buffer to the HMD camera.
   commandBuffer.SetViewMatrix(myViewMatrix);
   hmdCamera.AddCommandBuffer(CameraEvent.BeforeForwardOpaque, commandBuffer);

and using that to set the camera pose, but to no avail. I didn’t even run into the synchronization issues I was expecting.
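(For completeness, this is roughly how such an attempt might be wired up as a component; GetTrackedViewMatrix is a placeholder for the custom tracking code, not an existing API.)

    using UnityEngine;
    using UnityEngine.Rendering;

    // Sketch of the command-buffer attempt: re-record a view-matrix override every
    // frame, executed just before opaque geometry is drawn. It still only runs at
    // the rate of the UI loop, which is exactly the limitation described above.
    [RequireComponent(typeof(Camera))]
    public class ViewMatrixOverride : MonoBehaviour
    {
        Camera hmdCamera;
        CommandBuffer commandBuffer;

        void OnEnable()
        {
            hmdCamera = GetComponent<Camera>();
            commandBuffer = new CommandBuffer { name = "Override HMD view matrix" };
            hmdCamera.AddCommandBuffer(CameraEvent.BeforeForwardOpaque, commandBuffer);
        }

        void OnPreRender()
        {
            commandBuffer.Clear();
            commandBuffer.SetViewMatrix(GetTrackedViewMatrix());
        }

        void OnDisable()
        {
            hmdCamera.RemoveCommandBuffer(CameraEvent.BeforeForwardOpaque, commandBuffer);
        }

        Matrix4x4 GetTrackedViewMatrix()
        {
            // Placeholder: return the view matrix produced by the custom HMD tracking.
            return hmdCamera.worldToCameraMatrix;
        }
    }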

Edit, since the result is somewhat hidden in this long thread: both the Oculus and the Vive runtimes generate synthetic frames to fill in between the application-provided frames (asynchronous reprojection / timewarp) if the application falls behind and doesn’t deliver frame data in time. These frames are what make the native Lighthouse tracking seem smoother than our own implementation. This feature lives outside of Unity, and therefore my problem cannot be solved in Unity. The question slightly misrepresents the inner workings of Unity, because at the time I wrote it I didn’t understand that this is not a Unity feature at all. Even native rendering plugins run at the same frame rate as the game code.

Oculus headsets already do what you’re trying to do, and OpenVR does something similar as well, I believe.

It sounds like you have some resource-intensive code that is already decoupled from rendering. If that’s the case, just put it into another thread, and be careful about synchronization. Or just put it into a coroutine and go for the cooperative multitasking route.
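(A rough sketch of the threaded variant, assuming the heavy step produces some result that can be handed back to the main thread; MyResult, ComputeHeavyStep and ApplyResult are placeholders, not real APIs.)

    using System.Threading;
    using UnityEngine;

    // Sketch of the "put it into another thread" suggestion: run the expensive
    // game-logic step on a worker thread and hand its result back to the main
    // thread, which stays free to poll the HMD pose every frame. The worker must
    // not touch any Unity API; only the main thread applies the result.
    public class HeavyLogicWorker : MonoBehaviour
    {
        readonly object resultLock = new object();
        MyResult latestResult;   // placeholder result type of the heavy computation
        Thread worker;
        volatile bool running;

        void OnEnable()
        {
            running = true;
            worker = new Thread(() =>
            {
                while (running)
                {
                    MyResult r = ComputeHeavyStep();   // placeholder for the expensive logic
                    lock (resultLock) { latestResult = r; }
                }
            });
            worker.Start();
        }

        void Update()
        {
            // Apply whatever the worker has finished so far; never block on it.
            lock (resultLock)
            {
                if (latestResult != null) ApplyResult(latestResult);
            }
        }

        void OnDisable()
        {
            running = false;
            worker.Join();
        }

        class MyResult { /* fields produced by the heavy step */ }
        MyResult ComputeHeavyStep() { return new MyResult(); }
        void ApplyResult(MyResult r) { /* update scene state from the result */ }
    }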

If the issue is how much you’re rendering, no amount of finagling will fix the root problem of GPU throughput. At that point, it’s just “render less.”

I looked into writing a plugin using Unity’s low-level native plugin rendering extensions. Timing the frequency of calls into the plugin for the UnityRenderingExtEventType events kUnityRenderingExtEventSetStereoTarget and kUnityRenderingExtEventBeforeDrawCall tells me that these calls are also only issued at the frequency of the UI loop.

In other words, it doesn’t seem possible to update the orientation at a higher frequency than the UI loop.

One can also do custom blits, so it might be possible to shift the rendered image across the surface of the display in between the full updates and thus approximate the effect of a re-rendered 3D view. But besides not knowing Direct3D well enough to judge whether this is feasible with what I have available, I also don’t see a way to synchronize it correctly.