Find UV coordinates of mesh without a raycast

I have a scene with a set of very simple procedurally generated meshes (flat, with four vertices each). Each of the meshes has a texture drawn on it.

I also have a player character that moves around underneath these meshes. While the scene is technically in 3D, it is effectively 2D, with a top-down, orthographic camera and no perspective.

What I would like to do is, given the player position, get the UV coordinates of the point on the mesh the player is currently under. Figuring out if the player is under a specific mesh hasn’t been that hard (using a C# version of http://wiki.unity3d.com/index.php/PolyContainsPoint), but I can’t wrap my head around how to grab the UV coordinates of the mesh.

I know that traditionally you would simply raycast upwards to the mesh and then get the UV coordinates from the RaycastHit. However, this isn’t an option because I can’t attach mesh colliders to these meshes (they deform in real-time, and continually updating the mesh colliders is too expensive).

I would think this would be possible to do without a raycast, given that everything is effectively 2D, and I know the positions of all the vertices on the mesh, the texture UVs, and the player position. I’ve found some guidance on the forums for the inverse of my problem (converting UV coordinate on a mesh into world coordinates), but nothing for my specific problem (converting world coordinates into UV coordinates on a mesh).

Any help would be appreciated!

Well, that’s quite easy given your restrictions of a 2D orthographic projection without rotation. The easiest way is to use barycentric coordinates. Everything you need I have already posted over here. Given that all your coordinates are within the same plane (just set z = 0 for all positions), you can simply use GetBarycentric() with the vertex positions of the two triangles and use InTriangle to determine which triangle contains the given point. Once you have the barycentric coordinates of your player and know the triangle, you can calculate the UV coordinates by simply using

Vector2 uv = Vert1uv * bary.x + Vert2uv * bary.y + Vert3uv * bary.z;

Of course you need to use the same vertex order you used in GetBarycentric.
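
Put together, a minimal sketch of the whole lookup for your quad case could look like this (GetUVAtPoint and its parameters are made-up names for illustration; GetBarycentric is the one linked above, or the 3D version in the edit below, and the quads are assumed to lie flat in their local z = 0 plane):

using UnityEngine;

public static class MeshUVLookup
{
    // Sketch: find the UV under a world-space point on a flat mesh.
    // Returns false when the point is outside every triangle.
    // GetBarycentric is assumed to be in scope (see the edit below).
    public static bool GetUVAtPoint(Mesh mesh, Transform meshTransform, Vector3 worldPos, out Vector2 uv)
    {
        Vector3[] verts = mesh.vertices;
        Vector2[] uvs = mesh.uv;
        int[] tris = mesh.triangles;
        // Work in the mesh's local space so the vertex data can be used
        // directly, and flatten to the z = 0 plane as described above.
        Vector3 p = meshTransform.InverseTransformPoint(worldPos);
        p.z = 0f;

        for (int i = 0; i < tris.Length; i += 3)
        {
            int i0 = tris[i], i1 = tris[i + 1], i2 = tris[i + 2];
            Vector3 bary = GetBarycentric(verts[i0], verts[i1], verts[i2], p);
            // The weights always sum to 1, so all three being non-negative
            // already means the point is inside this triangle.
            if (bary.x >= 0f && bary.y >= 0f && bary.z >= 0f)
            {
                // Same interpolation as above, with matching vertex order.
                uv = uvs[i0] * bary.x + uvs[i1] * bary.y + uvs[i2] * bary.z;
                return true;
            }
        }
        uv = Vector2.zero;
        return false;
    }
}

For your quads the loop only ever runs over two triangles, so it’s cheap enough to do every frame.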

edit

Here’s a 3D version of GetBarycentric:

// Returns the barycentric coordinates of point aP with respect to the
// triangle (aV1, aV2, aV3). The x, y and z components are the weights
// for aV1, aV2 and aV3 respectively, and they always sum to 1.
public static Vector3 GetBarycentric(Vector3 aV1, Vector3 aV2, Vector3 aV3, Vector3 aP)
{
    Vector3 a = aV2 - aV3, b = aV1 - aV3, c = aP - aV3;
    // Dot products, written out by hand (see the note below).
    float aLen = a.x * a.x + a.y * a.y + a.z * a.z; // dot(a, a)
    float bLen = b.x * b.x + b.y * b.y + b.z * b.z; // dot(b, b)
    float ab = a.x * b.x + a.y * b.y + a.z * b.z;   // dot(a, b)
    float ca = a.x * c.x + a.y * c.y + a.z * c.z;   // dot(a, c)
    float cb = b.x * c.x + b.y * c.y + b.z * c.z;   // dot(b, c)
    float d = aLen * bLen - ab * ab;
    Vector3 B = new Vector3(aLen * cb - ab * ca, bLen * ca - ab * cb) / d;
    B.z = 1.0f - B.x - B.y;
    return B;
}

It’s actually just a couple of dot products; however, to save method / operator calls I’ve written those out manually, so it’s only native float math except for a, b and c.
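
As a quick sanity check of the ordering: the returned weights line up with aV1, aV2 and aV3, so interpolating the vertex positions with them reconstructs the input point (the numbers here are just an example):

// Right triangle in the z = 0 plane, point inside it.
Vector3 v1 = new Vector3(0, 0, 0);
Vector3 v2 = new Vector3(1, 0, 0);
Vector3 v3 = new Vector3(0, 1, 0);
Vector3 p  = new Vector3(0.25f, 0.25f, 0);

Vector3 b = GetBarycentric(v1, v2, v3, p);   // (0.5, 0.25, 0.25)
Vector3 q = v1 * b.x + v2 * b.y + v3 * b.z;  // equals p again
// Feeding the same weights into the UV interpolation line above
// gives the UV at p.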

second edit

I just packed everything into a struct. That way it’s clearer that barycentric coordinates are not some kind of Cartesian coordinates. I put it on the wiki. The “in triangle check” is now included and can be tested with a simple property. It also has an Interpolate method which simplifies the usage a little bit. I added constructors for Vector2, Vector3 and Vector4 values as well as Color values, so you can feed in a reference color and it determines the position in the triangle based on the given vertex colors. Also note that Vector4 is treated as a four-dimensional vector and not as a “w”-normalized Vector3.
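
The exact code is on the wiki, but as a rough sketch of its shape (member names here are approximations; only the Vector2 constructor is shown, the Vector3, Vector4 and Color ones follow the same pattern):

using UnityEngine;

public struct Barycentric
{
    // Weights for the first, second and third vertex; they sum to 1.
    public float u, v, w;

    public Barycentric(Vector2 aV1, Vector2 aV2, Vector2 aV3, Vector2 aP)
    {
        Vector2 a = aV2 - aV3, b = aV1 - aV3, c = aP - aV3;
        float aLen = a.x * a.x + a.y * a.y;
        float bLen = b.x * b.x + b.y * b.y;
        float ab = a.x * b.x + a.y * b.y;
        float ca = a.x * c.x + a.y * c.y;
        float cb = b.x * c.x + b.y * c.y;
        float d = aLen * bLen - ab * ab;
        u = (aLen * cb - ab * ca) / d;
        v = (bLen * ca - ab * cb) / d;
        w = 1.0f - u - v;
    }

    // The "in triangle check" as a simple property.
    public bool IsInside
    {
        get { return u >= 0f && v >= 0f && w >= 0f; }
    }

    // Interpolate a per-vertex attribute with the stored weights;
    // overloads for Vector3, Vector4 and Color work the same way.
    public Vector2 Interpolate(Vector2 aV1, Vector2 aV2, Vector2 aV3)
    {
        return aV1 * u + aV2 * v + aV3 * w;
    }
}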