Base ten decimal number 10 110 101 110 001 101.110 11 converted to 32 bit single precision IEEE 754 binary floating point standard

How to convert the decimal number 10 110 101 110 001 101.110 11(10) to 32 bit single precision IEEE 754 binary floating point (1 bit for sign, 8 bits for exponent, 23 bits for mantissa)

1. First, convert to binary (base 2) the integer part: 10 110 101 110 001 101. Divide the number repeatedly by 2, keeping track of each remainder, until we get a quotient that is equal to zero:

• division = quotient + remainder;
• 10 110 101 110 001 101 ÷ 2 = 5 055 050 555 000 550 + 1;
• 5 055 050 555 000 550 ÷ 2 = 2 527 525 277 500 275 + 0;
• 2 527 525 277 500 275 ÷ 2 = 1 263 762 638 750 137 + 1;
• 1 263 762 638 750 137 ÷ 2 = 631 881 319 375 068 + 1;
• 631 881 319 375 068 ÷ 2 = 315 940 659 687 534 + 0;
• 315 940 659 687 534 ÷ 2 = 157 970 329 843 767 + 0;
• 157 970 329 843 767 ÷ 2 = 78 985 164 921 883 + 1;
• 78 985 164 921 883 ÷ 2 = 39 492 582 460 941 + 1;
• 39 492 582 460 941 ÷ 2 = 19 746 291 230 470 + 1;
• 19 746 291 230 470 ÷ 2 = 9 873 145 615 235 + 0;
• 9 873 145 615 235 ÷ 2 = 4 936 572 807 617 + 1;
• 4 936 572 807 617 ÷ 2 = 2 468 286 403 808 + 1;
• 2 468 286 403 808 ÷ 2 = 1 234 143 201 904 + 0;
• 1 234 143 201 904 ÷ 2 = 617 071 600 952 + 0;
• 617 071 600 952 ÷ 2 = 308 535 800 476 + 0;
• 308 535 800 476 ÷ 2 = 154 267 900 238 + 0;
• 154 267 900 238 ÷ 2 = 77 133 950 119 + 0;
• 77 133 950 119 ÷ 2 = 38 566 975 059 + 1;
• 38 566 975 059 ÷ 2 = 19 283 487 529 + 1;
• 19 283 487 529 ÷ 2 = 9 641 743 764 + 1;
• 9 641 743 764 ÷ 2 = 4 820 871 882 + 0;
• 4 820 871 882 ÷ 2 = 2 410 435 941 + 0;
• 2 410 435 941 ÷ 2 = 1 205 217 970 + 1;
• 1 205 217 970 ÷ 2 = 602 608 985 + 0;
• 602 608 985 ÷ 2 = 301 304 492 + 1;
• 301 304 492 ÷ 2 = 150 652 246 + 0;
• 150 652 246 ÷ 2 = 75 326 123 + 0;
• 75 326 123 ÷ 2 = 37 663 061 + 1;
• 37 663 061 ÷ 2 = 18 831 530 + 1;
• 18 831 530 ÷ 2 = 9 415 765 + 0;
• 9 415 765 ÷ 2 = 4 707 882 + 1;
• 4 707 882 ÷ 2 = 2 353 941 + 0;
• 2 353 941 ÷ 2 = 1 176 970 + 1;
• 1 176 970 ÷ 2 = 588 485 + 0;
• 588 485 ÷ 2 = 294 242 + 1;
• 294 242 ÷ 2 = 147 121 + 0;
• 147 121 ÷ 2 = 73 560 + 1;
• 73 560 ÷ 2 = 36 780 + 0;
• 36 780 ÷ 2 = 18 390 + 0;
• 18 390 ÷ 2 = 9 195 + 0;
• 9 195 ÷ 2 = 4 597 + 1;
• 4 597 ÷ 2 = 2 298 + 1;
• 2 298 ÷ 2 = 1 149 + 0;
• 1 149 ÷ 2 = 574 + 1;
• 574 ÷ 2 = 287 + 0;
• 287 ÷ 2 = 143 + 1;
• 143 ÷ 2 = 71 + 1;
• 71 ÷ 2 = 35 + 1;
• 35 ÷ 2 = 17 + 1;
• 17 ÷ 2 = 8 + 1;
• 8 ÷ 2 = 4 + 0;
• 4 ÷ 2 = 2 + 0;
• 2 ÷ 2 = 1 + 0;
• 1 ÷ 2 = 0 + 1;
• We have encountered a quotient that is ZERO => FULL STOP

2. Construct the base 2 representation of the integer part of the number, by taking all the remainders of the previous dividing operations, starting from the bottom of the list constructed above:

10 110 101 110 001 101(10) = 10 0011 1110 1011 0001 0101 0101 1001 0100 1110 0000 1101 1100 1101(2)

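The repeated-division procedure above is easy to automate. Here is a minimal Python sketch (illustrative only; `to_base2` is a hypothetical helper name, not from the original page):

```python
def to_base2(n: int) -> str:
    """Convert a non-negative decimal integer to binary by repeatedly
    dividing by 2 and collecting the remainders (read bottom-up)."""
    if n == 0:
        return "0"
    remainders = []
    while n > 0:
        n, r = divmod(n, 2)   # division = quotient + remainder
        remainders.append(str(r))
    # The last remainder is the leftmost (most significant) binary digit.
    return "".join(reversed(remainders))

print(to_base2(10_110_101_110_001_101))
```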
3. Convert to binary (base 2) the fractional part: 0.110 11. Multiply it repeatedly by 2, keeping track of each integer part of the results, until we get a fractional part that is equal to zero:

• #) multiplying = integer + fractional part;
• 1) 0.110 11 × 2 = 0 + 0.220 22;
• 2) 0.220 22 × 2 = 0 + 0.440 44;
• 3) 0.440 44 × 2 = 0 + 0.880 88;
• 4) 0.880 88 × 2 = 1 + 0.761 76;
• 5) 0.761 76 × 2 = 1 + 0.523 52;
• 6) 0.523 52 × 2 = 1 + 0.047 04;
• 7) 0.047 04 × 2 = 0 + 0.094 08;
• 8) 0.094 08 × 2 = 0 + 0.188 16;
• 9) 0.188 16 × 2 = 0 + 0.376 32;
• 10) 0.376 32 × 2 = 0 + 0.752 64;
• 11) 0.752 64 × 2 = 1 + 0.505 28;
• 12) 0.505 28 × 2 = 1 + 0.010 56;
• 13) 0.010 56 × 2 = 0 + 0.021 12;
• 14) 0.021 12 × 2 = 0 + 0.042 24;
• 15) 0.042 24 × 2 = 0 + 0.084 48;
• 16) 0.084 48 × 2 = 0 + 0.168 96;
• 17) 0.168 96 × 2 = 0 + 0.337 92;
• 18) 0.337 92 × 2 = 0 + 0.675 84;
• 19) 0.675 84 × 2 = 1 + 0.351 68;
• 20) 0.351 68 × 2 = 0 + 0.703 36;
• 21) 0.703 36 × 2 = 1 + 0.406 72;
• 22) 0.406 72 × 2 = 0 + 0.813 44;
• 23) 0.813 44 × 2 = 1 + 0.626 88;
• 24) 0.626 88 × 2 = 1 + 0.253 76;
• We didn't get any fractional part that was equal to zero. But we had enough iterations (over the mantissa limit of 23) and at least one integer part that was different from zero => FULL STOP (losing precision...)

4. Construct the base 2 representation of the fractional part of the number, by taking all the integer parts of the previous multiplying operations, starting from the top of the list constructed above:

0.110 11(10) = 0.0001 1100 0011 0000 0010 1011(2)

5. Summarizing - the positive number before normalization:

10 110 101 110 001 101.110 11(10) = 10 0011 1110 1011 0001 0101 0101 1001 0100 1110 0000 1101 1100 1101.0001 1100 0011 0000 0010 1011(2)

Normalize the binary representation of the number, shifting the decimal point 53 positions to the left so that only one nonzero digit stays to the left of the decimal point:

= 1.0 0011 1110 1011 0001 0101 0101 1001 0100 1110 0000 1101 1100 1101 0001 1100 0011 0000 0010 1011(2) × 2^53

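The multiply-by-2 loop can likewise be sketched in Python. Using `fractions.Fraction` keeps the decimal arithmetic exact, so each step matches the hand calculation above (a sketch; `frac_to_base2` is a hypothetical name):

```python
from fractions import Fraction

def frac_to_base2(frac: Fraction, max_bits: int = 24) -> str:
    """Convert a fraction 0 <= frac < 1 to binary by repeatedly
    multiplying by 2 and collecting the integer parts (read top-down)."""
    bits = []
    while frac != 0 and len(bits) < max_bits:
        frac *= 2
        bit = int(frac)        # integer part of the result
        bits.append(str(bit))
        frac -= bit            # keep only the fractional part
    return "".join(bits)

print(frac_to_base2(Fraction("0.11011")))   # 000111000011000000101011
```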
6. Adjust the exponent in 8 bit excess/bias notation and then convert it from decimal (base 10) to 8 bit binary, by using the same technique of repeatedly dividing by 2:

Exponent (adjusted) = Exponent (unadjusted) + 2^(8-1) - 1 = (53 + 127)(10) = 180(10)

• division = quotient + remainder;
• 180 ÷ 2 = 90 + 0;
• 90 ÷ 2 = 45 + 0;
• 45 ÷ 2 = 22 + 1;
• 22 ÷ 2 = 11 + 0;
• 11 ÷ 2 = 5 + 1;
• 5 ÷ 2 = 2 + 1;
• 2 ÷ 2 = 1 + 0;
• 1 ÷ 2 = 0 + 1;
• We have encountered a quotient that is ZERO => FULL STOP

Exponent (adjusted): 180(10) = 1011 0100(2)

7. Normalize the mantissa: remove the leading (leftmost) bit, since it's always '1', and adjust its length to 23 bits, by removing the excess bits from the right (losing precision...):

Mantissa (normalized): 000 1111 1010 1100 0101 0101

8. The sign bit is 0 (a positive number).

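The bias adjustment and the 8 bit binary rendering of the exponent can be cross-checked in a couple of lines of Python (a sketch; 53 is this example's unadjusted exponent):

```python
# Bias for 8 bit exponents: 2^(8-1) - 1 = 127
bias = 2 ** (8 - 1) - 1
adjusted = 53 + bias            # unadjusted exponent of this example is 53
print(format(adjusted, "08b"))  # 10110100
```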
Conclusion:

10 110 101 110 001 101.110 11(10) = 0 - 1011 0100 - 000 1111 1010 1100 0101 0101

(32 bit single precision IEEE 754 binary floating point: sign - exponent - mantissa)

Bit layout (bit 31 down to bit 0):

Sign (bit 31): 0
Exponent (bits 30 to 23): 1011 0100
Mantissa (bits 22 to 0): 000 1111 1010 1100 0101 0101
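The result can be cross-checked against Python's standard `struct` module, which packs a number into the IEEE 754 single precision byte layout (a sketch; for this particular value IEEE round-to-nearest happens to agree with the truncation used above, because the first discarded bit is 0):

```python
import struct

# Pack the value as a big-endian 32 bit IEEE 754 single precision float.
packed = struct.pack(">f", 10110101110001101.11011)
bits = "".join(f"{byte:08b}" for byte in packed)

sign, exponent, mantissa = bits[0], bits[1:9], bits[9:32]
print(sign, exponent, mantissa)   # 0 10110100 00011111010110001010101
```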


How to convert decimal numbers from base ten to 32 bit single precision IEEE 754 binary floating point standard

Follow the steps below to convert a base 10 decimal number to 32 bit single precision IEEE 754 binary floating point:

• 1. If the number to be converted is negative, start with its positive version.
• 2. First convert the integer part. Divide the positive base ten integer repeatedly by 2, keeping track of each remainder, until we get a quotient that is equal to zero.
• 3. Construct the base 2 representation of the positive integer part of the number, by taking all the remainders of the previous dividing operations, starting from the bottom of the list constructed above. Thus, the last remainder of the divisions becomes the first symbol (the leftmost) of the base two number, while the first remainder becomes the last symbol (the rightmost).
• 4. Then convert the fractional part. Multiply the number repeatedly by 2, until we get a fractional part that is equal to zero, keeping track of each integer part of the results.
• 5. Construct the base 2 representation of the fractional part of the number by taking all the integer parts of the previous multiplying operations, starting from the top of the constructed list above (they should appear in the binary representation, from left to right, in the order they have been calculated).
• 6. Normalize the binary representation of the number, by shifting the decimal point (or, if you prefer, the decimal mark) "n" positions either to the left or to the right, so that only one nonzero digit remains to the left of the decimal point.
• 7. Adjust the exponent in 8 bit excess/bias notation and then convert it from decimal (base 10) to 8 bit binary, by using the same technique of repeatedly dividing by 2, as shown above:
Exponent (adjusted) = Exponent (unadjusted) + 2^(8-1) - 1 = Exponent (unadjusted) + 127
• 8. Normalize the mantissa: remove the leading (leftmost) bit, since it's always '1' (and the decimal point, if present), and adjust its length to 23 bits, either by removing the excess bits from the right (losing precision...) or by adding extra '0' bits to the right.
• 9. Sign (it takes 1 bit) is either 1 for a negative or 0 for a positive number.
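Steps 1 through 9 can be put together in one short Python sketch. It follows the text's behaviour of truncating excess mantissa bits (step 8) rather than IEEE round-to-nearest, uses exact `Fraction` arithmetic, and only handles numbers whose absolute value is at least 1; `to_ieee754_single` is a hypothetical name:

```python
from fractions import Fraction

def to_ieee754_single(value: str) -> tuple[str, str, str]:
    """Return (sign, exponent, mantissa) bit strings for a decimal number
    with |value| >= 1, truncating excess mantissa bits as in the text."""
    x = Fraction(value)
    sign = "1" if x < 0 else "0"          # steps 1 and 9
    x = abs(x)

    # Steps 2-3: integer part by repeated division by 2.
    int_part = int(x)
    int_bits = ""
    while int_part > 0:
        int_part, r = divmod(int_part, 2)
        int_bits = str(r) + int_bits

    # Steps 4-5: fractional part by repeated multiplication by 2
    # (24 iterations are always enough when |value| >= 1).
    frac = x - int(x)
    frac_bits = ""
    for _ in range(24):
        frac *= 2
        bit = int(frac)
        frac_bits += str(bit)
        frac -= bit

    # Steps 6-7: normalization and exponent biasing.
    exponent = len(int_bits) - 1          # positions shifted to the left
    exp_bits = format(exponent + 127, "08b")

    # Step 8: drop the leading '1', keep 23 bits (truncating).
    mantissa = (int_bits + frac_bits)[1:24].ljust(23, "0")
    return sign, exp_bits, mantissa

print(to_ieee754_single("-25.347"))
```

On the example below, this reproduces the text's result exactly: sign 1, exponent 1000 0011, mantissa 100 1010 1100 0110 1010 0111.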

Example: convert the negative number -25.347 from decimal system (base ten) to 32 bit single precision IEEE 754 binary floating point:

• 1. Start with the positive version of the number:

|-25.347| = 25.347

• 2. First convert the integer part, 25. Divide it repeatedly by 2, keeping track of each remainder, until we get a quotient that is equal to zero:
• division = quotient + remainder;
• 25 ÷ 2 = 12 + 1;
• 12 ÷ 2 = 6 + 0;
• 6 ÷ 2 = 3 + 0;
• 3 ÷ 2 = 1 + 1;
• 1 ÷ 2 = 0 + 1;
• We have encountered a quotient that is ZERO => FULL STOP
• 3. Construct the base 2 representation of the integer part of the number by taking all the remainders of the previous dividing operations, starting from the bottom of the list constructed above:

25(10) = 1 1001(2)

• 4. Then convert the fractional part, 0.347. Multiply repeatedly by 2, keeping track of each integer part of the results, until we get a fractional part that is equal to zero:
• #) multiplying = integer + fractional part;
• 1) 0.347 × 2 = 0 + 0.694;
• 2) 0.694 × 2 = 1 + 0.388;
• 3) 0.388 × 2 = 0 + 0.776;
• 4) 0.776 × 2 = 1 + 0.552;
• 5) 0.552 × 2 = 1 + 0.104;
• 6) 0.104 × 2 = 0 + 0.208;
• 7) 0.208 × 2 = 0 + 0.416;
• 8) 0.416 × 2 = 0 + 0.832;
• 9) 0.832 × 2 = 1 + 0.664;
• 10) 0.664 × 2 = 1 + 0.328;
• 11) 0.328 × 2 = 0 + 0.656;
• 12) 0.656 × 2 = 1 + 0.312;
• 13) 0.312 × 2 = 0 + 0.624;
• 14) 0.624 × 2 = 1 + 0.248;
• 15) 0.248 × 2 = 0 + 0.496;
• 16) 0.496 × 2 = 0 + 0.992;
• 17) 0.992 × 2 = 1 + 0.984;
• 18) 0.984 × 2 = 1 + 0.968;
• 19) 0.968 × 2 = 1 + 0.936;
• 20) 0.936 × 2 = 1 + 0.872;
• 21) 0.872 × 2 = 1 + 0.744;
• 22) 0.744 × 2 = 1 + 0.488;
• 23) 0.488 × 2 = 0 + 0.976;
• 24) 0.976 × 2 = 1 + 0.952;
• We didn't get any fractional part that was equal to zero. But we had enough iterations (over Mantissa limit = 23) and at least one integer part that was different from zero => FULL STOP (losing precision...).
• 5. Construct the base 2 representation of the fractional part of the number, by taking all the integer parts of the previous multiplying operations, starting from the top of the constructed list above:

0.347(10) = 0.0101 1000 1101 0100 1111 1101(2)

• 6. Summarizing - the positive number before normalization:

25.347(10) = 1 1001.0101 1000 1101 0100 1111 1101(2)

• 7. Normalize the binary representation of the number, shifting the decimal point 4 positions to the left so that only one non-zero digit stays to the left of the decimal point:

25.347(10) =
1 1001.0101 1000 1101 0100 1111 1101(2) =
1 1001.0101 1000 1101 0100 1111 1101(2) × 2^0 =
1.1001 0101 1000 1101 0100 1111 1101(2) × 2^4

• 8. Up to this moment, the following elements of the 32 bit single precision IEEE 754 binary floating point representation are known:

Sign: 1 (a negative number)

Mantissa (not-normalized): 1.1001 0101 1000 1101 0100 1111 1101

• 9. Adjust the exponent in 8 bit excess/bias notation and then convert it from decimal (base 10) to 8 bit binary (base 2), by using the same technique of repeatedly dividing it by 2, as already demonstrated above:

Exponent (adjusted) = Exponent (unadjusted) + 2^(8-1) - 1 = (4 + 127)(10) = 131(10) =
1000 0011(2)

• 10. Normalize the mantissa: remove the leading (leftmost) bit, since it's always '1' (and the decimal point), and adjust its length to 23 bits, by removing the excess bits from the right (losing precision...):

Mantissa (not-normalized): 1.1001 0101 1000 1101 0100 1111 1101

Mantissa (normalized): 100 1010 1100 0110 1010 0111

• Conclusion:

Sign (1 bit) = 1 (a negative number)

Exponent (8 bits) = 1000 0011

Mantissa (23 bits) = 100 1010 1100 0110 1010 0111

-25.347(10) = 1 - 1000 0011 - 100 1010 1100 0110 1010 0111 (32 bit single precision IEEE 754)
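As the "losing precision" notes hint, truncation is not quite what IEEE 754 hardware does by default: the standard rounding mode is round-to-nearest-even. Python's `struct` module applies that mode, so for -25.347 the last four mantissa bits come out as 1000 instead of the truncated 0111 obtained above (a quick check, using only the standard library):

```python
import struct

packed = struct.pack(">f", -25.347)        # round-to-nearest-even
bits = "".join(f"{byte:08b}" for byte in packed)
sign, exponent, mantissa = bits[0], bits[1:9], bits[9:32]
print(sign, exponent, mantissa)
# 1 10000011 10010101100011010101000  (mantissa rounded up, not truncated)
```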