Reading mesh data without Unity making a copy

Hello!

I am working on a procedural planet generator for a school project, so for the dynamic LOD I create meshes at runtime.

However, I have some functions that need to access the data contained in the terrain meshes, like the height of the vertices or the normals to calculate the slope.
Since that data is only stored in the mesh (keeping it in other arrays would double the memory usage, and I'm trying to be as efficient as possible), I have always been accessing it through meshFilter.mesh.vertices or meshFilter.mesh.normals.
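For example, here's a simplified sketch of the kind of access I mean (the class and method names are just illustrative, not my actual code):

```csharp
using UnityEngine;

public class SlopeQueryExample : MonoBehaviour
{
    // Illustrative only, not my actual code.
    // The mesh.normals property access below is what I'm asking about.
    public float GetSlopeAngle(MeshFilter meshFilter, int vertexIndex)
    {
        Vector3[] normals = meshFilter.mesh.normals; // does this copy the whole array each call?
        return Vector3.Angle(normals[vertexIndex], Vector3.up);
    }
}
```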

But then I read something here about how, when one does this, Unity creates a copy of the entire array each time you call it. I guess it's a safety measure of some sort, but here I'm only reading the data, not writing to it, so it seems like a waste of performance.

I tried using sharedMesh instead (so the call is meshFilter.sharedMesh.vertices), but the code takes the same amount of time to execute, so I guess it makes no difference?

So my question is: does Unity really make a copy of the mesh arrays each time you access them, even with sharedMesh? And if so, is there a way to avoid it without storing the data somewhere else?

Thanks for your time! :)

There’s no way around that. There are actually several obstacles. The main problem is that the vertex and index buffers are usually mapped to native system memory by calling “Lock” on a vertex buffer object. Since we are inside a managed environment, all objects we work with have to be created in managed memory, so a Vector3[] in C# / Mono is always a managed array. There’s no way to directly access the native memory area that Unity’s C++ core works with.

The next thing is, even if it were possible to map the returned native pointer directly to a managed array, it wouldn’t make sense to perform random access on mapped VRAM. That would be slow as hell. In most cases, even when programming in native code, you prepare your data in system memory and then quickly copy it over to VRAM.

You said

[…] trying to be as efficient as possible

“Efficiency” is two-fold in this case. Optimising for memory usage and optimising for performance are two very different things.

VRAM is special memory that is optimised for the GPU, so the GPU gets fast read access. Accessing VRAM from the CPU is generally slower; the CPU usually relies on optimised block writes to quickly copy a chunk of data over.

Unity might actually cache the vertex data in system memory; however, when you want to access it, it still has to be marshalled / copied into managed memory.

They recently added methods like SetVertices, which lets you pass a generic List directly to the native code. That’s handy when you dynamically create / modify the vertex data, since you don’t have to copy it to a separate array first. Unfortunately there’s no GetVertices counterpart. It would be useful to be able to pass an already allocated managed array to Unity’s native code and have it fill that array instead of creating a new one each time.
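For illustration, here's a minimal sketch of using SetVertices with a reused List so your own code doesn't allocate a fresh collection every rebuild (the class name and vertex generation are just placeholders, and it assumes a MeshFilter on the same GameObject):

```csharp
using System.Collections.Generic;
using UnityEngine;

[RequireComponent(typeof(MeshFilter))]
public class DynamicMeshSketch : MonoBehaviour
{
    // Reused buffer, so rebuilding the mesh doesn't allocate a new list each time.
    private readonly List<Vector3> vertexBuffer = new List<Vector3>();
    private Mesh mesh;

    void Awake()
    {
        mesh = new Mesh();
        GetComponent<MeshFilter>().sharedMesh = mesh;
    }

    void RebuildVertices(int count)
    {
        vertexBuffer.Clear();
        for (int i = 0; i < count; i++)
        {
            // Placeholder vertex generation; real terrain code would go here.
            vertexBuffer.Add(new Vector3(i, 0f, 0f));
        }
        // Hands the list to the native side without an extra array copy in our own code.
        mesh.SetVertices(vertexBuffer);
    }
}
```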

So if you need to read the mesh data frequently, you should hold a copy in managed memory. Having a larger memory footprint won’t affect performance unless your system runs out of memory and the OS starts using a swapfile. Caching that data will definitely improve performance and reduce garbage collection / memory allocation hiccups.
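As a rough sketch of what I mean by caching (the class and method names are made up for illustration):

```csharp
using UnityEngine;

// Copies the mesh data once at creation time; all later reads hit the
// cached managed arrays instead of calling mesh.vertices / mesh.normals again.
public class TerrainChunkCache
{
    private readonly Vector3[] cachedVertices;
    private readonly Vector3[] cachedNormals;

    public TerrainChunkCache(Mesh mesh)
    {
        // One copy each; no further allocations after this point.
        cachedVertices = mesh.vertices;
        cachedNormals = mesh.normals;
    }

    public float Height(int vertexIndex)
    {
        return cachedVertices[vertexIndex].y;
    }

    public float SlopeAngle(int vertexIndex, Vector3 up)
    {
        return Vector3.Angle(cachedNormals[vertexIndex], up);
    }
}
```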