Getting the correct object level depth in shader

Hey people. So I am trying to write a shader that will render a billboard sprite, tilted at 45°, at what would be (0, y, 0) of the model transform.

The reason for doing this is that we are using 2D sprites in a 3D environment, angled to look isometric. Using 2D sprites requires angling them at 45° to look correct, which leaves us with either normal-sized colliders and the sprite clipping into geometry, or stupidly large colliders.

From my limited knowledge of shaders and from scouring forums, I have figured out how to write to the depth buffer, but as you can see in the images, the sprite renders correctly when it is in front of a 3D object and incorrectly when it is behind one.

Behind the 3D object but rendering on top.

In front of the 3D object and clipping, but rendering correctly.

So, how do I get the correct depth value to render this sprite as if it were not tilted 45°?

    SubShader
	{
		Tags
		{
			"Queue"="AlphaTest"
			"IgnoreProjector"="True"
			"RenderType"="TransparentCutout"
			"PreviewType"="Plane"
			"CanUseSpriteAtlas"="True"
		}

		Lighting Off
		ZWrite On
		Cull Off
		Blend One OneMinusSrcAlpha

		Pass 
		{

		CGPROGRAM
			#pragma vertex vert
			#pragma fragment frag
			#pragma multi_compile _ PIXELSNAP_ON
			#pragma shader_feature ETC1_EXTERNAL_ALPHA
			#include "UnityCG.cginc"

			struct appdata
			{
			  float4 vertex    : POSITION;  
			  float3 normal    : NORMAL;   
			  float4 texcoord  : TEXCOORD0; 
			  float4 texcoord1 : TEXCOORD1;
			  float4 tangent   : TANGENT;   
			  float4 color     : COLOR;  
			};

		   	struct v2f
		   	{
				float4 pos      : SV_POSITION;  // rasterised position
				float4 objpos   : TEXCOORD1;    // clip-space position used only for the depth output
				fixed4 color    : COLOR;
				float2 texcoord : TEXCOORD0;
			};

			//added
			fixed4 _Color;

			v2f vert (appdata v)
			{
				v2f o;

				// World-space pivot (object origin) and vertex position.
				float4 c  = mul(_Object2World, float4(0.0, 0.0, 0.0, 1.0));
				float4 ws = mul(_Object2World, v.vertex);

				// Flatten the tilted quad for depth purposes: the horizontal distance
				// from the pivot becomes a pure X offset and Z is pinned to the pivot's Z.
				ws.x = c.x + length(float2(ws.x - c.x, ws.z - c.z));
				ws.z = c.z;

				// Clip-space position of the flattened point, used only for the depth output.
				o.objpos = mul(UNITY_MATRIX_VP, ws);

				// Regular clip-space position used for rasterisation.
				o.pos = mul(UNITY_MATRIX_MVP, v.vertex);

				o.texcoord = v.texcoord.xy;
				o.color = v.color * _Color;

				return o;
			}

            sampler2D _MainTex;
			sampler2D _AlphaTex;

			fixed4 SampleSpriteTexture (float2 uv)
			{
				fixed4 color = tex2D (_MainTex, uv);

				#if ETC1_EXTERNAL_ALPHA
				// get the color from an external texture (usecase: Alpha support for ETC1 on android)
				color.a = tex2D (_AlphaTex, uv).r;
				#endif //ETC1_EXTERNAL_ALPHA
				return color;
			}
			
			struct Output
			{
				float4 col : SV_Target;
				float  dep : SV_Depth;
			};

			Output frag (v2f i)
			{
				Output o;
				// Override the depth with that of the flattened position from the vertex shader.
				o.dep = i.objpos.z / i.objpos.w;
				o.col = SampleSpriteTexture (i.texcoord) * i.color;
				o.col.rgb *= o.col.a;   // premultiply alpha to match Blend One OneMinusSrcAlpha
				return o;
			}
	      
		ENDCG
		}
	}

I’m actually working on this exact issue myself right now! Were you ever able to fix it, @Pranaryx?

I don’t know much about shaders myself, so I can’t tell what’s going on in yours, but going by the pictures and your description it seems like you may be rendering every pixel of the sprite at the same depth value, possibly the depth of the top edge? If I understand a friend’s advice correctly, you need to calculate the coordinates of the quad’s verts as if the sprite were standing upright, perpendicular to the floor, and then work out the depth relative to the camera angle from that… though since we’re billboarding, we probably only need to worry about the central vertical line.
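Very roughly, and reusing the structs and variable names from the shader above, I imagine the vertex function would end up something like this. It's only a sketch, untested, and it assumes the quad is tilted 45° about its local X axis around the pivot; names like pivotWS and uprightWS are just mine:

	v2f vert (appdata v)
	{
		v2f o;

		// World-space pivot (object origin) and vertex position.
		float4 pivotWS  = mul(_Object2World, float4(0.0, 0.0, 0.0, 1.0));
		float4 vertexWS = mul(_Object2World, v.vertex);
		float3 offset   = vertexWS.xyz - pivotWS.xyz;

		// For a 45° tilt about X, a point h units up the sprite sits at world height h / sqrt(2),
		// so scaling the Y offset by sqrt(2) recovers the height it would have standing upright.
		// Dropping the Z offset keeps the whole card on the vertical plane through the pivot.
		float3 uprightWS = pivotWS.xyz + float3(offset.x, offset.y * 1.41421356, 0.0);

		// Clip-space position of the upright point, used only for the depth output.
		o.objpos = mul(UNITY_MATRIX_VP, float4(uprightWS, 1.0));

		// Rasterise the quad where it actually is on screen.
		o.pos = mul(UNITY_MATRIX_MVP, v.vertex);
		o.texcoord = v.texcoord.xy;
		o.color = v.color * _Color;
		return o;
	}

No idea whether that actually sorts out the behind/in-front case, so treat it as a starting point rather than a fix.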

Some quick research brought me to rendering the camera’s depth texture as a possible means of troubleshooting this stuff. Can’t try it myself right now, but I imagine this should help show how the depth of the sprite gradates relative to the walls and floor.
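If it helps, a minimal depth-view shader for that might look roughly like this. The shader name is just a placeholder, it assumes the camera's depthTextureMode includes Depth so _CameraDepthTexture gets filled in, and you'd blit it over the screen (e.g. with Graphics.Blit from OnRenderImage):

	Shader "Hidden/DepthView"
	{
		SubShader
		{
			Pass
			{
				CGPROGRAM
				#pragma vertex vert_img
				#pragma fragment frag
				#include "UnityCG.cginc"

				// Populated by Unity once the camera renders a depth texture.
				sampler2D _CameraDepthTexture;

				fixed4 frag (v2f_img i) : SV_Target
				{
					float raw = SAMPLE_DEPTH_TEXTURE(_CameraDepthTexture, i.uv);
					float d   = Linear01Depth(raw);   // 0 = near plane, 1 = far plane
					return fixed4(d, d, d, 1);
				}
				ENDCG
			}
		}
	}

Darker pixels are closer to the near plane, so if the sprite's depth is varying the way we want, it should show up as a gradient against the walls and floor rather than a flat grey.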

EDIT: Yeah, something like this.