I want to build a platform for exploring 3D fractals in Unity. These are usually drawn using raycasting and distance marching, so all the interesting algorithms are going to be implemented in shader code. I’ve done some experimenting and it seems Unity will fit the bill, but I would really like to do the computations using 64-bit double-precision floats, instead of the default 32-bit ones.
Is there any way I can specify double-precision floats for a piece of shader code? I just need a few specific inner loops to run with double precision, while everything else (input parameters, output values, etc.) can be standard floats. I'm obviously aiming to run my code on mid/high-end desktop GPUs.
So unless you are sure that your target hardware actually supports double-precision floats, it's probably not worth using them. GPUs are optimised for 32-bit floating point, and even desktop cards that do expose fp64 typically run it at a small fraction of fp32 throughput. Depending on the platform the shader is compiled for, the compiler may also "promote" half or fixed variables to float, or silently "demote" double to float.