Just out of curiosity: is there a reason why most (if not all) Lerp functions in Unity only take [0, 1] as the range for t to interpolate? Are there performance issues involved?
In almost all cases, you don’t want to overshoot/undershoot the t value.
It is, however, occasionally useful.
Here’s an unclamped Lerp, in which case values outside 0-1 would be extrapolation:
public static float Lerp( float a, float b, float t ){
    return t*b + (1-t)*a;
}
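For example, with this unclamped version a t outside [0, 1] just continues along the same line (values below are from the formula above):

```csharp
float mid    = Lerp( 0f, 10f,  0.5f ); // 5  — ordinary interpolation
float past   = Lerp( 0f, 10f,  1.5f ); // 15 — extrapolated past b
float before = Lerp( 0f, 10f, -0.5f ); // -5 — extrapolated before a
```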
Alternatively, a function where it’s optional:
public static float Lerp( float a, float b, float t, bool extrapolate = false ){
    if( !extrapolate )
        t = Mathf.Clamp01( t );
    return t*b + (1-t)*a;
}
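Used with the defaults, this behaves like Unity's clamped Lerp; passing true opts in to extrapolation:

```csharp
float clamped = Lerp( 0f, 10f, 1.5f );       // 10 — t clamped to 1 by default
float shot    = Lerp( 0f, 10f, 1.5f, true ); // 15 — extrapolation opted in
```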
As for performance, not clamping is slightly cheaper, since clamping adds an extra operation per call.
This is to prevent unintended overshoot. The most common use of lerp is to move something from a start state to an end state. Allowing values higher than 1 or lower than 0 would put the result outside the original start and end states.
It's pretty trivial to roll your own lerp function that will let you overshoot.
Since it's a percentage, only the 0.0 to 1.0 range makes logical sense. It also takes a (tiny bit of) extra CPU time to clamp the value to that range.