Keeping track of which areas remain unexplored

Hello fellow Unity users,

I would like a way to keep track of where the player has explored. This is similar to the fog - of - war problem but I’m not concerned with visualising the fog. I only want to know what areas have not yet been explored so that I can spawn an item in an unexplored area.

I need several things:

  • a data structure to store the information (mesh, texture, collection of game objects, 2D array of booleans…)
  • an efficient way to modify the data as the player moves around (raycast, projector, camera, iterating, triggers…)
  • a way to map explored/unexplored areas in the data structure to world coordinates (InverseTransform, Transform, basic offset and scale…)

This is part of an experimental study. The game is 3D and first person. The player is searching for an item in a thick fog. I need them to search for a few minutes without getting lucky, but I also don’t want them to be mad at me if the item appears in a place they have already inspected or if it doesn’t exist at all. The levels are big enough that they take several minutes to explore (50x50m). I would like a resolution of about 0.5 to 1m. So I need a data structure that contains 2500 to 10000 elements. I’ve thought of several options but none are quite satisfactory either in performance or complexity (ie., they seem so complicated that they can’t be right). Here are some ideas which aren’t quite fully formed (sorry, lists within lists look ok in the preview but get flattened in the actual post, trying to work around it):

Using a black texture and painting it white as the player moves around (in a layer hidden from the main camera).

  • Not sure how to create a texture of the right dimensions (I didn’t do the 3D model of the environment and measuring distances in the Unity Editor is tricky).
  • Iterating over all pixels and comparing distances would be slow.

A projector attached to the player to create a circular patch around them.

  • Doesn’t record the data anywhere.

A camera which sees only a white disc around the player. By setting the No Clear flag and getting the camera to render to a texture, that texture should remain white wherever the player has been.

  • Camera needs to see the whole level from above. Not very scalable.
  • Need to convert from the render texture space to camera viewport space to world space.
  • It somehow doesn’t feel right to use a camera for this because it’s all about “invisible data” rather than something that we want to “see”.

Using a mesh and updating the vertices near the player.

  • Performance issues if I need to iterate over every vertex to determine which ones need to be updated.
  • Using a sparser mesh doesn’t give me the required resolution.

Using a plain array of bools.

  • Need to convert array indexes into world coordinates.
  • Iterating over array and computing distances will lead to poor performance.

Creating a grid of empty game objects at startup and deleting them when a sphere collider attached to the player collides with them.

  • Takes the best part of a minute to instantiate all the game objects and seems to slow down the game even if I configure the physics engine to only look for collisions between the grid objects and the player.
  • Maybe I need an editor script to create the grid objects so it gets loaded with the rest of the scene at startup (just thought of that option).

I’ve read answers to related questions which suggest creating a grid, and possibly using a hierarchy with large grid squares (or hexagons, or triangles) and smaller squares inside them which only do their detection when the parent detects that the player is close enough. Unfortunately, I can’t figure out how this should be done in practice.

Apologies for the long post and lack of code. Hopefully, you can give some insight into how to tackle this problem or at least comment on the solutions I’ve been toying with.

Many thanks.

– Carl

Not entirely solved but I’ve made a start.

Instantiating a grid when the game starts is very slow, so I wrote a simple editor script to do this. It takes the coordinates of two dummy objects as the corners, and a grid spacing parameter, and it creates a regular grid of nodes between the corners. The nodes are simple game objects with a small sphere collider on them and a special tag. I parent the nodes to an empty GameObject to keep the hierarchy tidy. This can take a minute or so but then I save the grid as part of the scene. I can also delete grid nodes that are inside walls for instance.

The player has a child GameObject with a trigger sphere collider. Its radius is equal to the farthest distance the player can see through the fog. The OnTriggerEnter function checks whether the triggering object is a grid node. If it is, the grid node is deactivated (which I assume is faster than destroying it). Both the sphere collider attached to the player, and the grid nodes, are assigned to a special layer and the physics engine is configured so that objects in this layer only collide with other objects in the same layer. This reduces the number of unwanted collisions.

This works but still seems to cause some jerkiness in the movement of the player as they move over grid nodes. It can be improved by increasing the spacing of the grid but this makes it less accurate.

When I need to find a location that has not yet been explored, I can look under the grid parent object and select any of the nodes under it that are still active.

Screenshot showing grid in action