How to size an orthographic camera each frame to fit a target mesh/transform? [Solved! Shared code]

I have an orthographic camera displaying in a window in the corner of the main player view, pointed at the player’s target. The player’s targets can be very large meshes, or very small ones. I want the targets to fill as much of the targeting window as possible, regardless of size.

I’ve been looking at the bounding boxes available from meshes, and that seems acceptable for a prototype… though I haven’t found the optimal mapping from bounding-box size to my camera’s orthographicSize.

However, my targets can be long along their Z axis and short along their X axis… making for wasted space when the player is viewing them along their Z axis.

I would ideally like my targeting camera to continuously increase/decrease its orthographicSize to keep the target mesh fully fitted in the targeting window: slowly zooming in or out as the player orbits the target, so that no space is wasted and the entire target is always shown.

Is there something I’m missing? Some way to calculate the maximum length of a mesh when viewed from an arbitrary direction?

Vector3[] points;
Mesh targetMesh;
Transform targetPreviousFrame;

void Update () {
	Transform t = ship.sensors.CurrentTarget;

	if (t != null) {

		// check whether the target is the same as last frame;
		// if it changed, get the new mesh reference and create
		// a copy of the mesh's vertices.
		// This will perform slowly if there are too many vertices; possible fix:
		// use an invisible, simplified boundary mesh that envelopes the original,
		// and turn this into a coroutine.
		if (t != targetPreviousFrame) {
			targetMesh = t.GetComponent<MeshFilter>().mesh;
			points = targetMesh.vertices;
		}
		targetPreviousFrame = t;

		float minX = Mathf.Infinity;
		float minY = Mathf.Infinity;
		float maxX = Mathf.NegativeInfinity;
		float maxY = Mathf.NegativeInfinity;

		for (int i = 0; i < targetMesh.vertexCount; i++) {

			// convert each vertex to world space, then flatten it
			// to viewport coords.
			// The bottom-left of the camera is (0,0); the top-right is (1,1).
			// (use a local copy so the cached vertices stay in mesh space
			// for the next frame)
			Vector3 vp = camera.WorldToViewportPoint(t.TransformPoint(points[i]));

			// grow the rectangle, in camera-size percents, that contains all vertices
			if (vp.x < minX) {
				minX = vp.x;
			}
			if (vp.y < minY) {
				minY = vp.y;
			}
			if (vp.x > maxX) {
				maxX = vp.x;
			}
			if (vp.y > maxY) {
				maxY = vp.y;
			}
		}

		// distance from the bottom corner to the top corner of the rectangle,
		// as a percentage of the current screen size
		float dist = Mathf.Sqrt(Mathf.Pow(maxX - minX, 2f) + Mathf.Pow(maxY - minY, 2f));

		// the viewport coordinates are already a percentage, so simply scale by the distance
		camera.orthographicSize *= dist;

		Vector3 v = shipTransform.position + OrbitRadius * (targetMesh.bounds.center - shipTransform.position).normalized;

		transform.position = v;
		transform.LookAt(targetMesh.bounds.center);
	}
	else {
		Vector3 v = shipTransform.position + shipTransform.forward * OrbitRadius;
		transform.position = v;
		transform.LookAt(shipTransform.position + shipTransform.forward);
	}
}

Renderer.bounds will give you an axis-aligned bounding box in world space. More importantly, you can calculate the eight corner points of the box from this data. Take these eight points and convert them to viewport coordinates using Camera.WorldToViewportPoint(). Find a bounding rectangle by taking the min and max x and y values over all eight points. Viewport points start at (0,0) in the bottom-left corner and go to (1,1) in the upper right, so the span between the minimum and maximum values tells you what fraction of the view the object currently fills. Then modify your orthographic size accordingly.
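As a rough, untested sketch of those steps (assuming `targetRenderer` and `cam` are references you already have to the target's renderer and the orthographic targeting camera):

```csharp
using UnityEngine;

public class FitOrthoCamera : MonoBehaviour
{
    public Renderer targetRenderer; // renderer of the current target
    public Camera cam;              // the orthographic targeting camera

    void LateUpdate()
    {
        Bounds b = targetRenderer.bounds; // world-space AABB
        Vector3 c = b.center;
        Vector3 e = b.extents;

        float minX = Mathf.Infinity, minY = Mathf.Infinity;
        float maxX = Mathf.NegativeInfinity, maxY = Mathf.NegativeInfinity;

        // visit the eight corners of the AABB (every +/- extents combination)
        for (int i = 0; i < 8; i++)
        {
            Vector3 corner = c + new Vector3(
                ((i & 1) == 0 ? -1f : 1f) * e.x,
                ((i & 2) == 0 ? -1f : 1f) * e.y,
                ((i & 4) == 0 ? -1f : 1f) * e.z);

            Vector3 vp = cam.WorldToViewportPoint(corner);
            minX = Mathf.Min(minX, vp.x); maxX = Mathf.Max(maxX, vp.x);
            minY = Mathf.Min(minY, vp.y); maxY = Mathf.Max(maxY, vp.y);
        }

        // The larger viewport-space span says how much of the window the
        // box currently fills (1.0 == exactly filling that dimension),
        // so scaling orthographicSize by it moves the fit toward exact.
        float fill = Mathf.Max(maxX - minX, maxY - minY);
        cam.orthographicSize *= fill;
    }
}
```

Only eight points per frame instead of every mesh vertex, so there's no need for the simplified-boundary-mesh workaround. You may want to Lerp toward the new size rather than snap, to avoid popping while the box's screen footprint changes during an orbit.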

I haven’t tried this, but it sounds like you’ve already taken the bounding box and used it to adjust your camera’s orthographic size to match. To deal with the wasted space, a hacky first thought is to add extra collision volumes and raycast against them to determine which axis you need to use to size your camera.
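Something like this, maybe (very much a sketch; assumes the target carries a BoxCollider aligned with its local axes, and `cam`/`target` are hypothetical references to your camera and target):

```csharp
using UnityEngine;

public class FacingAxisProbe : MonoBehaviour
{
    public Camera cam;
    public Transform target; // assumed to have a BoxCollider

    // Returns the world-space length of the target along whichever
    // local axis is most face-on to the camera, or 0 if the ray misses.
    public float FacingAxisLength()
    {
        Vector3 toTarget = (target.position - cam.transform.position).normalized;

        RaycastHit hit;
        if (Physics.Raycast(cam.transform.position, toTarget, out hit) &&
            hit.transform == target)
        {
            // Compare the hit normal against the target's axes to see
            // which face we're looking at, then report the box size
            // running across that face.
            float dx = Mathf.Abs(Vector3.Dot(hit.normal, target.right));
            float dz = Mathf.Abs(Vector3.Dot(hit.normal, target.forward));

            BoxCollider box = target.GetComponent<BoxCollider>();
            Vector3 size = Vector3.Scale(box.size, target.lossyScale);

            // looking at a Z face -> the X extent spans the view, and vice versa
            return dz >= dx ? size.x : size.z;
        }
        return 0f;
    }
}
```

That only disambiguates the long-vs-short axis at discrete orientations, though; the eight-corner viewport-rectangle approach above handles arbitrary view angles continuously.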

Could you detail what you’ve tried already? Maybe we can mad scientist this.