ZBuffer and Object Depth

I can easily retrieve the depth of an object using the camera's depth texture, but this is dependent on the camera's z position. I would like to "normalize" the depth so that it is not linked to the camera's distance from the object.

Here is a simple example of applying an object's depth, but it is view dependent. Maybe there's a matrix trick to normalize out the view position.

void vert(inout appdata_full v, out Input o)
{
    UNITY_INITIALIZE_OUTPUT(Input, o);
    // Clip-space position, then the screen coordinates used to sample
    // the camera depth texture in surf().
    o.pos = mul(UNITY_MATRIX_MVP, v.vertex);
    o.screenPos = ComputeScreenPos(o.pos);
}
 
void surf(Input IN, inout MyOutput OUT)
{
    half4 c = tex2D(_MainTex, IN.uv_MainTex);
    OUT.Albedo = c.rgb;
    OUT.Alpha = c.a;
 
    // Scene depth behind this fragment, converted with LinearEyeDepth so it
    // is in eye-space world units and can be compared against IN.screenPos.w,
    // which is also an eye-space depth. (Linear01Depth returns a 0..1 value
    // and cannot be mixed with screenPos.w.)
    float sceneDepth = LinearEyeDepth(tex2Dproj(_CameraDepthTexture, UNITY_PROJ_COORD(IN.screenPos)).r);
    half edgeBlendFactor = saturate((sceneDepth - IN.screenPos.w + _A) * _B); // _A = 7 and _B = 0.2
    float depth = 1.0 - edgeBlendFactor; // equivalent to lerp(1.0, 0.0, edgeBlendFactor), kept scalar
 
    OUT.Albedo = depth;
}
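
For reference, here are the declarations the snippet assumes; MyOutput just mirrors Unity's built-in SurfaceOutput, and _A/_B are material properties I set from the inspector:

struct Input
{
    float2 uv_MainTex;
    float4 pos;
    float4 screenPos;
};
 
struct MyOutput
{
    half3 Albedo;
    half3 Normal;
    half3 Emission;
    half Specular;
    half Gloss;
    half Alpha;
};
 
sampler2D _MainTex;
sampler2D _CameraDepthTexture; // filled in by Unity when the camera renders a depth texture
float _A; // depth offset (7 in my material)
float _B; // blend falloff (0.2 in my material)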

Thank you!

I'm not sure what you actually want to know. The "depth" as seen on screen is relative to the camera, because the camera defines the view and projection matrices. The camera is the view point; your view starts at the camera's position. There is no such thing as a "global" or "world-space" view, since that would mean your camera sits at (0, 0, 0) with no rotation.
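
That said, a depth *difference* is not tied to camera distance the way a single depth value is. If you convert both the scene depth and the fragment depth to eye space, both are distances from the camera in world units, so their difference is the thickness of the gap between your surface and whatever the depth buffer recorded behind it. That gap does not shrink just because the camera moves further away (it still varies with view angle). A minimal sketch of the idea for the surf() function above, where _MaxThickness is a made-up material property, not anything Unity provides:

float sceneZ = LinearEyeDepth(tex2Dproj(_CameraDepthTexture, UNITY_PROJ_COORD(IN.screenPos)).r);
float surfZ = IN.screenPos.w;             // eye-space depth of this fragment
float thickness = sceneZ - surfZ;         // world-unit gap behind the surface
OUT.Albedo = saturate(thickness / _MaxThickness); // normalize by a chosen maximum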