Calling camera.Render() twice: RenderTexture contents differ from ReadPixels()

I am trying to render something to a texture in two phases. I've created a capture camera; in phase (A) it renders what some other camera (A) sees. After loading some object, in phase (B) it renders what another camera (B) sees. So basically, I move my capture camera around, looking at what the other cameras see, and composite the results of the two Render() calls into one RenderTexture.

When I inspect the RenderTexture in the Editor, the result looks perfectly fine. But when I call ReadPixels to copy the RenderTexture into a Texture2D, the resulting texture only contains the first render (A). How is that possible? Any ideas? I've tried waiting until the end of the frame, waiting several frames, etc., but nothing worked…

	mCamera.CopyFrom(Camera.mainCamera);
	mCamera.clearFlags = CameraClearFlags.SolidColor;
	
	//Assign camera to temporary render target
	RenderTexture tTempRT = RenderTexture.GetTemporary(w, h, 16, RenderTextureFormat.RGB565);
	mCamera.targetTexture = tTempRT;
	// Make the temporary render target the active one.
	RenderTexture.active = mCamera.targetTexture;
	// (A) Then render it.
	mCamera.Render();
	RenderTexture.active = null; // go back to main context
	// Load something else and wait
	LoadSomething();
	while (!mIsReady) {
		yield return null;
	}
	// look at the new stuff we loaded from other camera
	mCamera.CopyFrom(mSomeOtherCamera);
	// Don't clear, because we'll render the frame on top
	mCamera.clearFlags = CameraClearFlags.Nothing;
	// Make the temporary render target the active one again.
	mCamera.targetTexture = tTempRT;
	RenderTexture.active = mCamera.targetTexture;
	// (B) Render the new stuff on top of (A)
	mCamera.Render();
	// at this point, if we wait, and check the RenderTexture in the editor, it contains both (A) and (B)
	//for (int i = 1; i < 1000000000; i++) yield return null;

	// But if we do this, tImage contains only (A), not (B) :(
	RenderTexture.active = mCamera.targetTexture;
	Texture2D tImage = new Texture2D(w, h, TextureFormat.RGB24, false);
	tImage.ReadPixels(new Rect(0, 0, w, h), 0, 0);
	tImage.Apply();
	
	mCamera.targetTexture = null; // Free the camera from the render target
	RenderTexture.active = null;  // Go back to the main context
	RenderTexture.ReleaseTemporary(tTempRT);
	
	someCallback(tImage);
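
For reference, ReadPixels copies from whatever RenderTexture.active points to at the moment it is called, so the usual capture pattern defers the read to the end of the frame. As I said, waiting alone didn't fix my case, but this is the standard shape (a sketch; the method and parameter names are just illustrative):

```csharp
using System.Collections;
using UnityEngine;

// Typical end-of-frame capture pattern: wait until rendering is done,
// then read the pixels out of the given RenderTexture.
IEnumerator CaptureAtEndOfFrame(RenderTexture source, System.Action<Texture2D> callback)
{
	yield return new WaitForEndOfFrame();

	RenderTexture.active = source;
	var image = new Texture2D(source.width, source.height, TextureFormat.RGB24, false);
	image.ReadPixels(new Rect(0, 0, source.width, source.height), 0, 0);
	image.Apply();
	RenderTexture.active = null; // go back to the main context

	callback(image);
}
```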

Edit:

  • actually, I start the process with

     mCamera.enabled = false
    
  • but in LoadSomething() I set mCamera.enabled = true, which is why I see (B) rendered in the RenderTexture…

  • If I keep mCamera.enabled = false all the way through, I only see the first render (A) in the RenderTexture, the same thing ReadPixels is reading… I don't understand.

OK, there was a mistake that wasn't detectable from the code above. Sorry…
The problem was the timing of the second capture:

    LoadSomething();
    while (!mIsReady) {
       yield return null;
    }

This LoadSomething() loads a graphic (FYI, a 2D object rendered using the LWF library), and in OnPostLoad() it tells us that it has finished loading. The problem was that I needed to wait one more frame, until the object's Render is called once, because that is where it initializes its transformation matrices.

When I was debugging in the Editor, because I was pausing with that busy loop, of course I could see the object: it was getting updated in the meantime. But when I tried to capture immediately, it hadn't appeared just yet… That's all. orz

So, to solve the problem, I manually call the object's exec once in OnPostLoad(). End of story.
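
In code, the fix looks roughly like this (a sketch: loadedObject and its Exec() call are stand-ins for the actual LWF object and whatever drives its first update, so check the real API of your LWF version):

```csharp
// Called when LWF finishes loading the object (callback name from my setup).
void OnPostLoad()
{
	// Drive one update manually so the object initializes its
	// transformation matrices before the capture camera renders it.
	loadedObject.Exec(); // stand-in for the real LWF update call
	mIsReady = true;     // now the capture coroutine may proceed
}
```

A library-agnostic alternative would be to simply `yield return null` one extra frame after mIsReady becomes true, giving the object a chance to render once before the capture.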
Sorry about the fuss.
I hope the code above is useful anyway.