We are using the DoF 3.4 image effect in most of our scenes. But because of the known problems with transparent materials not writing to the depth buffer, we use a second camera that only renders specific layers for this kind of content. The second camera is basically a duplicate of the main camera and always a child of it. It is stripped of all components, has its clear flags set to Don't Clear, and has a higher depth value than the main camera.
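For reference, the second-camera setup described above could also be done from code, roughly like this (a sketch only; the field names and the "TransparentFX" layer are assumptions, not taken from the project):

```csharp
using UnityEngine;

public class OverlayCameraSetup : MonoBehaviour
{
    public Camera mainCamera;
    public Camera overlayCamera; // child of mainCamera, renders only the transparent layers

    void Start()
    {
        overlayCamera.CopyFrom(mainCamera);                  // duplicate the main camera's settings
        overlayCamera.clearFlags = CameraClearFlags.Nothing; // "Don't Clear"
        overlayCamera.depth = mainCamera.depth + 1;          // render after the main camera
        // render only the layers the main camera skips (layer name is an assumption)
        overlayCamera.cullingMask = 1 << LayerMask.NameToLayer("TransparentFX");
    }
}
```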
The problem: when the second camera is active, it somehow interferes with the image effects on the main camera. For example, the DoF effect simply doesn't work. We have other effects on our cameras too, like SSAO and Bloom, and they don't work correctly either.
Now for the really creepy part: I started adding other image effects (like Fisheye), configured them so they don't actually do anything, and started activating and deactivating some of them. With certain additional effects enabled, everything seems to work. So I wrote an effect that does nothing but a simple blit, and that seems to work sometimes, too.
Some additional info: we need to use deferred rendering, because we need all the shadows we can get. Every image effect on the main camera works when we set the second camera to Forward rendering. But we can't use that, because we need to share the depth buffer between the two cameras so that objects rendered by the second camera get occluded by objects in front of them. We might even add more cameras that render other kinds of content (for example a selective glow effect). We are using Unity 3.5.5; the target platform is Windows PC.
For clarification I've put together a simple example project. It shows the configuration of both cameras. The DoF effect is attached to the main camera, and the simple blit effect I mentioned is actually attached twice to make it work. The funny thing is, DoF works when both PostFix effects are active, but when you deactivate just one of them, it stops working. The link to the project: https://skydrive.live.com/redir?resid=614466D76208072C!155
The question: can anyone explain this to me? I really don't understand what's going on. Why is the second camera interfering with the image effects, and why do they work when other effects that don't do anything are attached to the camera? Most importantly, is there a real fix for this problem? Are we doing something wrong, or do we have to live with this workaround, which can break every time we change anything on our cameras?
I really hope someone can help us. Thanks in advance.
I'm not sure which problems with transparent materials not writing to the depth buffer you are referring to. You can modify a shader that does blending so that it also writes to the depth buffer, but that usually only makes sense if the transparent objects do not intersect each other.
Assuming that you need the second camera, your workaround is unfortunately the fastest one currently possible.
When using deferred lighting we force rendering into a RenderTexture, let's call that texture firstRT.
- If there are no image effects, we blit from firstRT to the screen.
- If there is one image effect, we call OnRenderImage with firstRT as source and the screen as destination; you are responsible for blitting from source to destination with your material of choice, as per usual.
- If there are two image effects:
  - we call OnRenderImage with firstRT as source and secondRT as destination <== secondRT has the processed image, firstRT doesn't
  - we blit from secondRT to the screen
- If there are three image effects:
  - we call OnRenderImage with firstRT as source and secondRT as destination <== secondRT has the processed image
  - we call OnRenderImage with secondRT as source and firstRT as destination <== firstRT has the processed image, yay!
  - we blit from firstRT to the screen
If there is a second camera that uses deferred, it also forces rendering to a render texture, and coincidentally that texture is firstRT. So if you want the output of the second deferred camera to show up on top of the first camera's output, you need to make sure that firstRT has the right contents, by placing dummy image effects on the first camera... for now.
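A "dummy" image effect like the PostFix one mentioned in the question is just a pass-through blit. A minimal version (a sketch, not necessarily the asker's exact script) looks like this:

```csharp
using UnityEngine;

// Minimal pass-through image effect. It doesn't change the image at all,
// but its presence forces Unity to ping-pong between firstRT and secondRT
// as described in the list above.
[RequireComponent(typeof(Camera))]
public class DummyBlit : MonoBehaviour
{
    void OnRenderImage(RenderTexture source, RenderTexture destination)
    {
        Graphics.Blit(source, destination); // simple copy, no material
    }
}
```

With an odd number of real effects, one such dummy makes the final result land back in firstRT, which is why adding or removing one of them flips the behavior.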
We will of course fix that and gfx tests will be added to cover this case.
answered Nov 02, 2012 at 12:58 AM
Here's what I did to your project.
a) Created a render texture called rt and made it the target for your child camera.
b) Created a shader to do a blend:
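The shader itself isn't reproduced in this copy of the post; a minimal alpha-blend shader along these lines would fit the description (the exact blend mode is an assumption):

```
Shader "Custom/Blend"
{
    Properties
    {
        _MainTex ("Base (RGB)", 2D) = "white" {}
    }
    SubShader
    {
        Pass
        {
            ZTest Always Cull Off ZWrite Off
            Blend SrcAlpha OneMinusSrcAlpha   // blend the render texture over what's already on screen
            SetTexture [_MainTex] { combine texture }
        }
    }
}
```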
c) Created a material called blend that uses the blend shader.
d) Disabled the child camera in the inspector.
e) Set the child camera to render to rt.
f) Set the camera to clear using a skybox.
g) Removed your postfix scripts from the main camera.
h) Created a new script:
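The original script isn't included in this copy of the post either; based on steps a) through i) and the explanation below, it presumably looks something like this (the class and field names are guesses):

```csharp
using UnityEngine;

// Attached to the main camera. Manually renders the (disabled) child camera
// into its render texture, then blends that texture over the main camera's image.
public class ChildCameraBlend : MonoBehaviour
{
    public Camera childCamera;     // disabled in the inspector, targets rt
    public RenderTexture rt;       // the child camera's render target
    public Material blendMaterial; // uses the blend shader from step b)

    void OnRenderImage(RenderTexture source, RenderTexture destination)
    {
        childCamera.Render();                          // render the child camera into rt
        Graphics.Blit(source, destination);            // copy the main camera's image
        Graphics.Blit(rt, destination, blendMaterial); // blend rt on top of it
    }
}
```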
i) Added this script to the main camera, and hooked up the child camera and render target to it.
Now the main camera causes the child camera to render to a render target, which is then blended over the top of the main camera's output. I know the shader I use is probably not exactly what you want; it's just there to show the sequence of passes happening the way you want. The render target should also be created in code, so it can be sized to match the main camera.
I have submitted a bug report using your original project, since I think the results you are getting are not expected. I think OnRenderImage is not being called when we think it is. The docs mention an "ImageEffectOpaque attribute which enables image effects to be executed before the transparent render passes". I wonder if the OnRenderImage callbacks are
answered Oct 25, 2012 at 10:42 AM
Graham Dunnett ♦♦