Camera DepthNormals Issues

I am making an SSAO shader. My approach requires comparing the angle between a vector that points to the sample and the surface normal vector, in screen space.
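Something like this, sketched in Cg/HLSL (all the names here are illustrative, not my actual shader):

```hlsl
// Illustrative names, not my actual shader:
// viewPos   - position of the current fragment
// samplePos - position of a nearby sample point
// normal    - surface normal at the fragment
// (all three need to be in the same space, which is the whole problem)
float3 toSample = normalize(samplePos - viewPos);
// Cosine of the angle between the sample direction and the normal;
// samples in the hemisphere above the surface contribute occlusion.
float occlusion = max(0.0, dot(normal, toSample));
```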

Using _CameraDepthTexture and reconstructing the normal from the depth gives good results, but creates an artifact around the outlines of objects due to the large depth discontinuities there. Reconstructing the normal also requires additional samples and is expensive.
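For reference, there's also a cheap derivative-based variant of that reconstruction that avoids the extra taps, at the cost of faceted normals (a sketch; `viewPosFromDepth` is a hypothetical helper that unprojects a UV plus a _CameraDepthTexture sample back into view space):

```hlsl
// Cheap derivative-based normal reconstruction (faceted, no extra taps).
// viewPosFromDepth is a hypothetical helper that unprojects a UV plus
// a _CameraDepthTexture sample back into view space.
float3 p = viewPosFromDepth(i.uv);
// Screen-space derivatives of the position span the surface plane,
// so their cross product is the face normal.
// (Swap the cross arguments if the normal comes out inverted.)
float3 n = normalize(cross(ddy(p), ddx(p)));
```

The higher-quality version samples neighboring depths instead of using derivatives, which is where the extra cost comes from.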

Using _CameraDepthNormalsTexture gives a depth value with severe precision issues, and a normal that is not relative to the view, causing massive artifacts toward the side of the screen. Combining just the normals from this texture with the regular depth texture gives completely unexpected results. The two don’t appear to be in the same space at all.
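To be concrete, this is the standard decode path I mean (DecodeDepthNormal is the stock helper in UnityCG.cginc; the packing is where the precision goes):

```hlsl
sampler2D _CameraDepthNormalsTexture;

// ... in the fragment shader:
float depth01;      // linear 0..1 depth, packed into two 8-bit channels
float3 viewNormal;  // normal, stereographically packed into the other two
DecodeDepthNormal(tex2D(_CameraDepthNormalsTexture, i.uv), depth01, viewNormal);
```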

This thread gives a way to convert screen-space coordinates into whatever space the normals are in, from some Takahashi fellow at Unity Japan. Since I'm not well versed in projection matrices and the documentation is silent on this matter, there's absolutely no way I'd ever have figured something like this out by myself, just to do a basic task. What an oversight! Using this works, but it just adds more precision issues, presumably because the normals are also packed.
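For what it's worth, the general shape of that de-projection is something like this (a sketch, not the thread's exact math; it uses Unity's built-in unity_CameraInvProjection, and depth01 is the 0..1 linear depth decoded from the packed texture):

```hlsl
// uv is the screen UV in [0,1]; depth01 is the 0..1 linear depth.
// Unproject the pixel's far-plane point into view space.
// (unity_CameraInvProjection is GL-convention, so watch for y-flips
// on some platforms.)
float4 clipFar = float4(uv * 2.0 - 1.0, 1.0, 1.0);
float4 viewFar = mul(unity_CameraInvProjection, clipFar);
viewFar /= viewFar.w;
// The view ray through this pixel is a straight line from the camera,
// so scaling the far-plane point by the 0..1 depth lands on the surface.
float3 viewPos = viewFar.xyz * depth01;
```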

I could render my own normals pass and make sure the normals are in the space I need, but that seems potentially expensive; I'm presuming DepthNormals does both in one pass efficiently. I'd like this to work in forward rendering, so fetching stuff off the G-buffer is a no-go. That's assuming the G-buffer normals are better behaved in the first place.

What do?

Okay, so basically the problem is that the UV coordinates and screen depth are in screen space, but the COMPUTE_VIEW_NORMAL macro only rotates the normals into view space. I tried multiple approaches I found for de-projecting the screen-space coordinates back into view space, but none of them were any good.

So I gave up and rendered a buffer containing the view-space position of each pixel, and used that value directly instead of trying to de-project anything. The extra pass for generating it clocks in at under a millisecond for a basic scene, so that seems alright. Not optimal, but it works, and I can set the render target to any precision I need.
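In case anyone wants the shape of that extra pass, it's roughly this (a sketch, not my exact shader; rendered over the scene into a float render target, with UnityObjectToClipPos and UnityObjectToViewPos being the stock UnityCG.cginc helpers):

```hlsl
struct v2f {
    float4 pos     : SV_POSITION;
    float3 viewPos : TEXCOORD0;
};

v2f vert(appdata_base v) {
    v2f o;
    o.pos = UnityObjectToClipPos(v.vertex);
    // View-space position computed per vertex, interpolated per pixel.
    o.viewPos = UnityObjectToViewPos(v.vertex);
    return o;
}

float4 frag(v2f i) : SV_Target {
    // Write the interpolated view-space position straight out;
    // the render target format sets the precision.
    return float4(i.viewPos, 1.0);
}
```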