What is the correct way to do:
var myfloat = 0.123432;
var myint : int;
myint = myfloat * 32767;
Is it a typecast? It's to convert Unity float audio samples in the 0–1 range to 16-bit integers from 0 to 32767, the same as on a CD.
The correct formula would be round(myfloat * 32768). To convert back, you just divide by 32768. Notice the 8 at the end in both cases: using 32768 is more precise, since the rounding function returns 0 whenever myfloat * 32768 is still smaller than 0.5, and in practice the result will never actually reach 32768, as floating-point precision is unpredictable.
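The round-and-divide scheme above can be sketched in Python (function names are mine; the clamp to the int16 limits is my addition, since a full-scale 1.0 sample times 32768 would overflow a signed 16-bit value):

```python
def float_to_int16(sample):
    """Scale a float sample by 32768, round, and clamp to the signed 16-bit range."""
    value = round(sample * 32768)
    # 1.0 * 32768 = 32768 does not fit in a signed 16-bit integer, so clamp.
    return max(-32768, min(32767, value))

def int16_to_float(value):
    """Inverse mapping: divide by the same constant, 32768."""
    return value / 32768

print(float_to_int16(0.123432))  # → 4045
print(int16_to_float(-32768))    # → -1.0
```

The round trip is not exact for every value, but dividing by the same constant used when scaling keeps 0 mapping to 0 and full negative scale mapping to exactly -1.0.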
Perhaps something like:
floatArr = BitConverter.ToSingle(array, i*4) / 0x80000000;
(from "c# - create AudioClip from byte[]" on Stack Overflow)
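That snippet divides by 0x80000000 (2^31), which is the scale factor for 32-bit integer samples rather than 16-bit ones. A rough Python equivalent of that idea, assuming the byte array holds little-endian signed 32-bit PCM (the function name is mine):

```python
import struct

def pcm32_bytes_to_floats(data):
    """Interpret bytes as little-endian signed 32-bit PCM and scale to [-1.0, 1.0)."""
    count = len(data) // 4
    samples = struct.unpack("<%di" % count, data[:count * 4])
    # 0x80000000 = 2**31, the magnitude of the most negative 32-bit sample.
    return [s / 0x80000000 for s in samples]

raw = struct.pack("<2i", 0, 0x40000000)  # two samples: silence and half scale
print(pcm32_bytes_to_floats(raw))        # → [0.0, 0.5]
```

For 16-bit data the same pattern applies with 2-byte samples and a divisor of 0x8000 (32768), matching the reply above.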
var myFloat = 0.123432;
var myShort : short = System.Convert.ToInt16 (myFloat * 32767);
But given that it’s a signed 16-bit number, I’m not sure that’s actually right. Maybe it should be:
var myShort : short = System.Convert.ToInt16 (Mathf.Lerp (-32768, 32767, myFloat));
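The two candidates differ in what range they assume the float covers. A quick Python sketch of both mappings (function names are mine): multiplying by 32767 treats the float as already signed around 0, so silence (0.0) stays 0, while the Lerp version treats the float as lying in 0–1 and stretches it across the full signed 16-bit range, so 0.0 becomes -32768.

```python
def scale_symmetric(x):
    """Multiply by 32767: maps [-1.0, 1.0] to [-32767, 32767]; 0.0 stays 0."""
    return round(x * 32767)

def lerp_full_range(t):
    """Equivalent of Mathf.Lerp(-32768, 32767, t): maps [0.0, 1.0] onto [-32768, 32767]."""
    return round(-32768 + t * (32767 - (-32768)))

print(scale_symmetric(0.123432))  # → 4044
print(lerp_full_range(0.0))       # → -32768
print(lerp_full_range(1.0))       # → 32767
```

Since Unity's audio samples are signed floats in the -1 to 1 range, the symmetric scaling matches them directly; the Lerp mapping only makes sense if the input really is an unsigned 0–1 value.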