Data Format Fundamentals: Single Precision (FP32) vs Half Precision (FP16)
Now, let's take a closer look at the FP32 and FP16 formats. FP32 and FP16 are IEEE formats that represent floating-point numbers using 32-bit and 16-bit binary storage, respectively. Both formats consist of three parts: a) a sign bit, b) exponent bits, and c) mantissa bits. FP32 and FP16 differ in the number of bits allocated to the exponent and mantissa (8 exponent + 23 mantissa bits for FP32, 5 exponent + 10 mantissa bits for FP16), which results in different value ranges and precisions.
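To make the bit layout concrete, here is a minimal sketch (standard library only; the helper names fp32_fields and fp16_fields are illustrative, not part of any API) that unpacks the three fields from the raw bit patterns:

```python
import struct

def fp32_fields(x: float):
    bits = struct.unpack(">I", struct.pack(">f", x))[0]  # raw 32-bit pattern
    sign = bits >> 31                    # 1 sign bit
    exponent = (bits >> 23) & 0xFF       # 8 exponent bits (biased)
    mantissa = bits & 0x7FFFFF           # 23 mantissa bits
    return sign, exponent, mantissa

def fp16_fields(x: float):
    bits = struct.unpack(">H", struct.pack(">e", x))[0]  # raw 16-bit pattern
    sign = bits >> 15                    # 1 sign bit
    exponent = (bits >> 10) & 0x1F       # 5 exponent bits (biased)
    mantissa = bits & 0x3FF              # 10 mantissa bits
    return sign, exponent, mantissa

print(fp32_fields(-1.5))  # (1, 127, 4194304): mantissa field = 0.5 * 2^23
print(fp16_fields(-1.5))  # (1, 15, 512):      mantissa field = 0.5 * 2^10
```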
How do you convert FP16 and FP32 bit patterns to real values? According to the IEEE-754 standard, the decimal value for FP32 = (-1)^(sign) × 2^(stored exponent - 127) × (implicit leading 1 + fractional mantissa), where 127 is the exponent bias and the fractional mantissa is the 23-bit mantissa field divided by 2^23. For FP16, the formula becomes (-1)^(sign) × 2^(stored exponent - 15) × (implicit leading 1 + fractional mantissa), where 15 is the corresponding exponent bias and the 10-bit mantissa field is divided by 2^10.
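As a quick sanity check, this hypothetical decode helper applies the formula above to the fields extracted in the previous snippet (normal numbers only; subnormals and Inf/NaN are ignored for simplicity):

```python
def decode(sign, exponent, mantissa, exp_bias, mant_bits):
    # value = (-1)^sign * 2^(stored exponent - bias) * (1 + mantissa / 2^mant_bits)
    return (-1) ** sign * 2.0 ** (exponent - exp_bias) * (1 + mantissa / 2 ** mant_bits)

# FP32: bias = 127, 23 mantissa bits; FP16: bias = 15, 10 mantissa bits
print(decode(1, 127, 4194304, exp_bias=127, mant_bits=23))  # -1.5
print(decode(1, 15, 512, exp_bias=15, mant_bits=10))        # -1.5
```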
In this sense, the largest finite FP32 value is (2 - 2^(-23)) × 2^127 ≈ 3.4 × 10^38, and the largest finite FP16 value is (2 - 2^(-10)) × 2^15 = 65504. Note that the stored exponent for FP32 ranges from 0 to 255, and we exclude the largest value 0xFF since it is reserved for Inf/NaN; that is why the largest unbiased exponent is 254 - 127 = 127. A similar rule applies to FP16: the stored exponent ranges from 0 to 31, 0x1F is reserved, and the largest unbiased exponent is 30 - 15 = 15.
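These ranges are easy to verify with NumPy's finfo, for example:

```python
import numpy as np

for dtype in (np.float32, np.float16):
    info = np.finfo(dtype)
    print(dtype.__name__, "max:", info.max, "smallest normal:", info.tiny)

# float32 max: 3.4028235e+38   smallest normal: 1.1754944e-38  (= 2^-126)
# float16 max: 65504.0         smallest normal: 6.104e-05      (= 2^-14)
```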
As for precision, note that both the exponent and the mantissa contribute to the smallest representable magnitude (via subnormal, also called denormal, numbers), so FP32 can represent values as small as 2^(-23) × 2^(-126) = 2^(-149), and FP16 can represent values as small as 2^(-10) × 2^(-14) = 2^(-24).
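A short sketch of where each format bottoms out: values just below the smallest subnormal round to zero.

```python
import numpy as np

print(np.float16(2.0 ** -24))   # 6e-08  -> smallest positive FP16 subnormal
print(np.float16(2.0 ** -25))   # 0.0    -> below FP16 resolution, rounds to zero
print(np.float32(2.0 ** -149))  # 1e-45  -> smallest positive FP32 subnormal
print(np.float32(2.0 ** -150))  # 0.0    -> underflows to zero in FP32
```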
This difference between the FP32 and FP16 representations motivates the key consideration of mixed precision training: different layers/operations of a deep learning model are either insensitive or sensitive to value range and precision, and need to be handled accordingly.
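As one concrete illustration of how frameworks address this, here is a hedged sketch of a training step using PyTorch's automatic mixed precision (assuming model, optimizer, and loader are defined elsewhere): matrix multiplications run in FP16, range-sensitive operations stay in FP32, and the loss is scaled to keep small gradients from underflowing in FP16.

```python
import torch

scaler = torch.cuda.amp.GradScaler()  # rescales the loss to avoid FP16 gradient underflow

for inputs, targets in loader:
    optimizer.zero_grad()
    with torch.autocast(device_type="cuda", dtype=torch.float16):
        outputs = model(inputs)        # matmuls/convs run in FP16 under autocast
        loss = torch.nn.functional.cross_entropy(outputs, targets)  # kept in FP32
    scaler.scale(loss).backward()      # backward pass on the scaled loss
    scaler.step(optimizer)             # unscales gradients, then takes the step
    scaler.update()                    # adjusts the scale factor for the next step
```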