How can two frames be interpolated in real time?

Is it possible to take two photos (of the same static scene, with the cameras offset just enough for parallax to be apparent) and then, at runtime, generate a view from any point between the two camera positions by interpolating between them?

My ultimate goal is to bake an extremely high quality Unity scene into a stereoscopic 360 skybox for VR. 3D 360 content already exists (in ample volume, even); what’s different about my proposed version is that the parallax effect is preserved in the photo/video being viewed, which IMO is the biggest advantage that actually rendered static scenes still exclusively have.

That goal sounds incredibly complex, but the essence of the process can be narrowed down to simply interpolating or “tweening” between two photos of the same scene. At my current level of familiarity with the game engine, however, I don’t know whether even that can be done in real time, which is why I’d like to hear from all of you more experienced developers: can this be done?

If realtime frame interpolation can be done (and I do hope that it can), how might the effect be programmed? My initial sneaking suspicion is that shaders will have a big part to play; am I on the right track?

VR is already supported in Unity and the kind of thing you are looking to do is taken care of (mostly) for you automagically.

If you are trying to do something else, like render both images into a single image… then yes, you can have two cameras, and yes, you can alter the way they are rendered in the scene or on the screen. I’m not sure you can have both at the exact same layer, though; you have to pick one to render in front of the other, and as Cherno said, you can adjust the alpha of whichever is in front to fade between the two images.
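Very roughly, the fade could look something like the sketch below. This assumes the front camera already renders into a RenderTexture that a full-screen RawImage displays on top of the other camera's view; the class and field names are placeholders, and this is a plain cross-fade, not true parallax interpolation.

```csharp
using UnityEngine;
using UnityEngine.UI;

// Hypothetical sketch: fade between two camera views by animating the alpha of a
// full-screen RawImage that shows the front camera's RenderTexture.
public class CameraCrossfade : MonoBehaviour
{
    public RawImage overlayImage;        // full-screen RawImage displaying the front camera's RenderTexture
    [Range(0f, 1f)] public float blend;  // 0 = back camera only, 1 = front camera only

    void Update()
    {
        Color c = overlayImage.color;
        c.a = blend;                     // simple linear fade between the two rendered views
        overlayImage.color = c;
    }
}
```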

RenderTextures may also help you, but they have poorer performance than rendering directly to the screen.
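If you go the RenderTexture route, one possible setup (built-in render pipeline, untested sketch) is to have each offset camera render to its own RenderTexture and blend them per pixel with Graphics.Blit. The shader name here ("Hidden/BlendTwoTextures") and its properties are assumptions; you'd write a small lerp shader yourself.

```csharp
using UnityEngine;

// Hypothetical sketch: composite two pre-rendered camera views with a lerp shader.
// Attach to the main (output) camera; its own image is ignored in favor of the two views.
[RequireComponent(typeof(Camera))]
public class RenderTextureBlend : MonoBehaviour
{
    public RenderTexture leftEye;        // output of the first offset camera
    public RenderTexture rightEye;       // output of the second offset camera
    [Range(0f, 1f)] public float blend;  // interpolation weight between the two views
    public Material blendMaterial;       // material using the assumed "Hidden/BlendTwoTextures" lerp shader

    void OnRenderImage(RenderTexture src, RenderTexture dst)
    {
        blendMaterial.SetTexture("_OtherTex", rightEye); // assumed shader property
        blendMaterial.SetFloat("_Blend", blend);         // assumed shader property
        Graphics.Blit(leftEye, dst, blendMaterial);      // per-pixel lerp happens in the shader
    }
}
```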