How to use double-precision floats in a shader

I want to build a platform for exploring 3D fractals in Unity. These are usually drawn using raycasting and distance marching, so all the interesting algorithms are going to be implemented in shader code. I’ve done some experimenting and it seems Unity will fit the bill, but I would really like to do the computations using 64-bit double-precision floats, instead of the default 32-bit ones.

Is there any way I can specify double-precision floats for a piece of shader code? I just need a few specific inner loops to run in double precision, while everything else (input parameters, output values, etc.) can be standard floats. I'm obviously aiming to run my code on mid/high-end desktop GPUs.

Well, GPUs are designed around the 32-bit floating-point type, and support for anything wider depends on the shader language you use. The Cg language, for example, "allows profiles to treat double as float", so writing double in your shader code does not necessarily mean the GPU performs the calculations in double precision. Only a few GPUs actually have hardware support for double, and even on those, certain calculations may still be carried out in single precision, most notably the transcendental operations (sqrt, exp, pow, log, sin, cos, …).
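That said, if you are targeting D3D11-class desktop GPUs, HLSL under Shader Model 5.0 does expose a double type, with the caveats above: essentially only add, multiply, min/max and comparisons are guaranteed, division is an optional extension, and the transcendental intrinsics simply do not accept double operands. Below is a minimal, untested sketch of what that can look like in a Unity fragment shader; the shader name, the Mandelbrot-style loop and the iteration count are just placeholders for "a few specific inner loops", and it assumes your GPU and driver actually report double-precision support:

```hlsl
Shader "Hidden/DoubleLoopSketch"   // hypothetical name
{
    SubShader
    {
        Pass
        {
            CGPROGRAM
            // Shader Model 5.0 (D3D11) is needed for the HLSL 'double' type,
            // and the GPU/driver must actually support double precision.
            #pragma target 5.0
            #pragma vertex vert_img
            #pragma fragment frag
            #include "UnityCG.cginc"

            fixed4 frag (v2f_img i) : SV_Target
            {
                // Inputs stay 32-bit; widen to double only inside the inner loop.
                double cx = (double)i.uv.x;
                double cy = (double)i.uv.y;
                double zx = 0.0, zy = 0.0;

                // Only add/mul/min/max/compare are guaranteed for doubles;
                // intrinsics like sqrt, sin, cos, pow take no double operands,
                // and division needs the optional "extended doubles" feature.
                int n;
                for (n = 0; n < 64; n++)
                {
                    double x2 = zx * zx - zy * zy + cx;
                    zy = (double)2.0 * zx * zy + cy;
                    zx = x2;
                    if (zx * zx + zy * zy > (double)4.0) break;
                }

                // Narrow back to float for the output.
                float t = n / 64.0;
                return fixed4(t, t, t, 1);
            }
            ENDCG
        }
    }
}
```

Note that the interface stays in 32-bit float (the uv coming in, the colour going out); only the loop itself is widened to double, which matches what you described.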

So unless you are certain that your target hardware actually supports double-precision floats, it's probably not worth using them. Again, GPUs are optimised for 32-bit floating point; the compiler or driver may even "promote" half or fixed variables to float, or silently "demote" double to float.
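If you do experiment with doubles, it is worth keeping the precision switchable so the same shader still compiles on hardware or graphics APIs without double support. A rough sketch of that pattern, where USE_DOUBLES is a hypothetical keyword you would define yourself (e.g. via #pragma multi_compile) rather than anything built into Unity:

```hlsl
// Precision switch for the inner-loop maths only; uniforms, vertex data
// and outputs stay plain 32-bit float at the interface.
// USE_DOUBLES is a hypothetical keyword you define yourself
// (e.g. via #pragma multi_compile __ USE_DOUBLES), not a Unity built-in.
#if defined(SHADER_API_D3D11) && defined(USE_DOUBLES)
    #define REAL  double
    #define REAL2 double2
#else
    #define REAL  float
    #define REAL2 float2
#endif

// Example helper written against REAL so the same code compiles either way;
// it avoids sqrt, which has no double overload in HLSL.
REAL lengthSquared(REAL2 z)
{
    return z.x * z.x + z.y * z.y;
}
```

That way you can compare the float and double paths on your own hardware and decide whether the (usually significant) performance cost of double is actually buying you deeper zooms.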