# Convert the Number 101 011 110 100 094 to 32 Bit Single Precision IEEE 754 Binary Floating Point Representation Standard, From a Base 10 Decimal System Number. Detailed Explanations

## Number 101 011 110 100 094(10) converted and written in 32 bit single precision IEEE 754 binary floating point representation (1 bit for sign, 8 bits for exponent, 23 bits for mantissa)

### 1. Divide the number repeatedly by 2.

#### Keep track of each remainder; we stop when we get a quotient that is equal to zero.

• division: dividend ÷ 2 = quotient + remainder;
• 101 011 110 100 094 ÷ 2 = 50 505 555 050 047 + 0;
• 50 505 555 050 047 ÷ 2 = 25 252 777 525 023 + 1;
• 25 252 777 525 023 ÷ 2 = 12 626 388 762 511 + 1;
• 12 626 388 762 511 ÷ 2 = 6 313 194 381 255 + 1;
• 6 313 194 381 255 ÷ 2 = 3 156 597 190 627 + 1;
• 3 156 597 190 627 ÷ 2 = 1 578 298 595 313 + 1;
• 1 578 298 595 313 ÷ 2 = 789 149 297 656 + 1;
• 789 149 297 656 ÷ 2 = 394 574 648 828 + 0;
• 394 574 648 828 ÷ 2 = 197 287 324 414 + 0;
• 197 287 324 414 ÷ 2 = 98 643 662 207 + 0;
• 98 643 662 207 ÷ 2 = 49 321 831 103 + 1;
• 49 321 831 103 ÷ 2 = 24 660 915 551 + 1;
• 24 660 915 551 ÷ 2 = 12 330 457 775 + 1;
• 12 330 457 775 ÷ 2 = 6 165 228 887 + 1;
• 6 165 228 887 ÷ 2 = 3 082 614 443 + 1;
• 3 082 614 443 ÷ 2 = 1 541 307 221 + 1;
• 1 541 307 221 ÷ 2 = 770 653 610 + 1;
• 770 653 610 ÷ 2 = 385 326 805 + 0;
• 385 326 805 ÷ 2 = 192 663 402 + 1;
• 192 663 402 ÷ 2 = 96 331 701 + 0;
• 96 331 701 ÷ 2 = 48 165 850 + 1;
• 48 165 850 ÷ 2 = 24 082 925 + 0;
• 24 082 925 ÷ 2 = 12 041 462 + 1;
• 12 041 462 ÷ 2 = 6 020 731 + 0;
• 6 020 731 ÷ 2 = 3 010 365 + 1;
• 3 010 365 ÷ 2 = 1 505 182 + 1;
• 1 505 182 ÷ 2 = 752 591 + 0;
• 752 591 ÷ 2 = 376 295 + 1;
• 376 295 ÷ 2 = 188 147 + 1;
• 188 147 ÷ 2 = 94 073 + 1;
• 94 073 ÷ 2 = 47 036 + 1;
• 47 036 ÷ 2 = 23 518 + 0;
• 23 518 ÷ 2 = 11 759 + 0;
• 11 759 ÷ 2 = 5 879 + 1;
• 5 879 ÷ 2 = 2 939 + 1;
• 2 939 ÷ 2 = 1 469 + 1;
• 1 469 ÷ 2 = 734 + 1;
• 734 ÷ 2 = 367 + 0;
• 367 ÷ 2 = 183 + 1;
• 183 ÷ 2 = 91 + 1;
• 91 ÷ 2 = 45 + 1;
• 45 ÷ 2 = 22 + 1;
• 22 ÷ 2 = 11 + 0;
• 11 ÷ 2 = 5 + 1;
• 5 ÷ 2 = 2 + 1;
• 2 ÷ 2 = 1 + 0;
• 1 ÷ 2 = 0 + 1;
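
The repeated-division procedure above can be sketched in Python (the function name is my own, not from any library):

```python
def int_to_base2(n):
    """Convert a non-negative integer to a binary digit string by
    repeatedly dividing by 2 and collecting the remainders."""
    if n == 0:
        return "0"
    remainders = []
    while n > 0:
        n, r = divmod(n, 2)            # quotient and remainder
        remainders.append(str(r))
    # the last remainder is the most significant (leftmost) bit
    return "".join(reversed(remainders))

print(int_to_base2(101_011_110_100_094))
```

Reading the remainders from bottom to top reproduces the 47 binary digits of the number.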

### 6. Convert the adjusted exponent from decimal (base 10) to 8 bit binary.

#### The binary number above has 47 digits, so the unadjusted exponent is 46; with the bias added, the adjusted exponent is 46 + 127 = 173. Use the same technique of repeatedly dividing by 2:

• division: dividend ÷ 2 = quotient + remainder;
• 173 ÷ 2 = 86 + 1;
• 86 ÷ 2 = 43 + 0;
• 43 ÷ 2 = 21 + 1;
• 21 ÷ 2 = 10 + 1;
• 10 ÷ 2 = 5 + 0;
• 5 ÷ 2 = 2 + 1;
• 2 ÷ 2 = 1 + 0;
• 1 ÷ 2 = 0 + 1;
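
The same division technique, fixed to exactly 8 bits, can be sketched as (function name is my own):

```python
def to_8bit_binary(e):
    """8 bit binary string of a biased exponent (0 <= e <= 255),
    by the same repeated division by 2; 8 iterations give 8 bits,
    so small values come out zero-padded on the left."""
    bits = ""
    for _ in range(8):
        e, r = divmod(e, 2)
        bits = str(r) + bits           # remainders fill in right to left
    return bits

print(to_8bit_binary(173))
```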

## The base ten decimal number 101 011 110 100 094 converted and written in 32 bit single precision IEEE 754 binary floating point representation: 0 - 1010 1101 - 011 0111 1011 1100 1111 0110

(32 bits IEEE 754, from bit 31, the leftmost, down to bit 0, the rightmost)

| Field | Bit positions | Bits |
|---|---|---|
| Sign | 31 | 0 |
| Exponent | 30–23 | 1010 1101 |
| Mantissa | 22–0 | 011 0111 1011 1100 1111 0110 |
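
As a cross-check, the three fields can be packed into a single 32 bit word with shifts and ORs (a sketch; the field values are taken from the result above):

```python
# field values from the conversion above
sign     = 0b0
exponent = 0b10101101                  # 173, the biased exponent (8 bits)
mantissa = 0b01101111011110011110110   # 23 bits

# sign in bit 31, exponent in bits 30-23, mantissa in bits 22-0
word = (sign << 31) | (exponent << 23) | mantissa
print(format(word, "032b"))
```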

## How to convert decimal numbers from base ten to 32 bit single precision IEEE 754 binary floating point standard

### Follow the steps below to convert a base 10 decimal number to 32 bit single precision IEEE 754 binary floating point:

• 1. If the number to be converted is negative, start with its positive version.
• 2. First convert the integer part: divide the positive base ten integer repeatedly by 2 until we get a quotient that is equal to zero, keeping track of each remainder.
• 3. Construct the base 2 representation of the positive integer part of the number by taking the remainders of the previous dividing operations in reverse order, starting from the bottom of the list constructed above. Thus, the last remainder of the divisions becomes the first (leftmost) digit of the base two number, while the first remainder becomes the last (rightmost) digit.
• 4. Then convert the fractional part: multiply it repeatedly by 2 until we get a fractional part that is equal to zero, keeping track of the integer part of each result.
• 5. Construct the base 2 representation of the fractional part of the number by taking the integer parts of the previous multiplying operations, starting from the top of the list constructed above (they appear in the binary representation, from left to right, in the order they were calculated).
• 6. Normalize the binary representation of the number by shifting the decimal point (more precisely, the radix point) "n" positions either to the left or to the right, so that only one non zero digit remains to the left of the point.
• 7. Adjust the exponent in 8 bit excess/bias notation (add the bias, 127) and then convert it from decimal (base 10) to 8 bit binary, using the same technique of repeatedly dividing by 2, as shown above.
• 8. Normalize the mantissa: remove the leading (leftmost) bit, since it is always '1' (and remove the radix point, if present), then adjust its length to 23 bits, either by removing the excess bits from the right (losing precision...) or by adding extra '0' bits to the right.
• 9. The sign takes 1 bit: 1 for a negative number, 0 for a positive number.

### Example: convert the negative number -25.347 from decimal system (base ten) to 32 bit single precision IEEE 754 binary floating point:

• 1. Start with the positive version of the number: |-25.347| = 25.347

• 2. First convert the integer part, 25. Divide it repeatedly by 2, keeping track of each remainder, until we get a quotient that is equal to zero:
• division: dividend ÷ 2 = quotient + remainder;
• 25 ÷ 2 = 12 + 1;
• 12 ÷ 2 = 6 + 0;
• 6 ÷ 2 = 3 + 0;
• 3 ÷ 2 = 1 + 1;
• 1 ÷ 2 = 0 + 1;
• We have encountered a quotient that is ZERO => FULL STOP
• 3. Construct the base 2 representation of the integer part of the number by taking all the remainders of the previous dividing operations, starting from the bottom of the list constructed above:

25(10) = 1 1001(2)
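
As a quick cross-check, Python's built-in base 2 formatting gives the same digits:

```python
# built-in binary formatting of the integer part
print(format(25, "b"))
```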

• 4. Then convert the fractional part, 0.347. Multiply repeatedly by 2, keeping track of each integer part of the results, until we get a fractional part that is equal to zero:
• #) multiplication: number × 2 = integer part + fractional part;
• 1) 0.347 × 2 = 0 + 0.694;
• 2) 0.694 × 2 = 1 + 0.388;
• 3) 0.388 × 2 = 0 + 0.776;
• 4) 0.776 × 2 = 1 + 0.552;
• 5) 0.552 × 2 = 1 + 0.104;
• 6) 0.104 × 2 = 0 + 0.208;
• 7) 0.208 × 2 = 0 + 0.416;
• 8) 0.416 × 2 = 0 + 0.832;
• 9) 0.832 × 2 = 1 + 0.664;
• 10) 0.664 × 2 = 1 + 0.328;
• 11) 0.328 × 2 = 0 + 0.656;
• 12) 0.656 × 2 = 1 + 0.312;
• 13) 0.312 × 2 = 0 + 0.624;
• 14) 0.624 × 2 = 1 + 0.248;
• 15) 0.248 × 2 = 0 + 0.496;
• 16) 0.496 × 2 = 0 + 0.992;
• 17) 0.992 × 2 = 1 + 0.984;
• 18) 0.984 × 2 = 1 + 0.968;
• 19) 0.968 × 2 = 1 + 0.936;
• 20) 0.936 × 2 = 1 + 0.872;
• 21) 0.872 × 2 = 1 + 0.744;
• 22) 0.744 × 2 = 1 + 0.488;
• 23) 0.488 × 2 = 0 + 0.976;
• 24) 0.976 × 2 = 1 + 0.952;
• We didn't get any fractional part equal to zero, but we ran enough iterations (more than the 23 bit mantissa limit) and got at least one integer part different from zero => FULL STOP (losing precision...).
• 5. Construct the base 2 representation of the fractional part of the number, by taking all the integer parts of the previous multiplying operations, starting from the top of the constructed list above:

0.347(10) = 0.0101 1000 1101 0100 1111 1101(2)
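
The repeated-multiplication procedure can be sketched as follows (function name is my own; `Fraction` is used so the decimal value stays exact instead of drifting in binary floating point):

```python
from fractions import Fraction

def frac_to_base2(x, max_bits=24):
    """Binary digits of a fraction 0 <= x < 1, by repeatedly
    multiplying by 2 and collecting the integer parts."""
    f = Fraction(str(x))               # exact decimal value
    bits = []
    while f != 0 and len(bits) < max_bits:
        f *= 2
        bits.append("1" if f >= 1 else "0")
        if f >= 1:
            f -= 1                     # keep only the fractional part
    return "".join(bits)

print(frac_to_base2(0.347))
```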

• 6. Summarizing - the positive number before normalization:

25.347(10) = 1 1001.0101 1000 1101 0100 1111 1101(2)

• 7. Normalize the binary representation of the number, shifting the decimal point 4 positions to the left so that only one non-zero digit stays to the left of the decimal point:

25.347(10) =
1 1001.0101 1000 1101 0100 1111 1101(2) =
1 1001.0101 1000 1101 0100 1111 1101(2) × 2^0 =
1.1001 0101 1000 1101 0100 1111 1101(2) × 2^4

• 8. So far, we have the following elements that will feed into the 32 bit single precision IEEE 754 binary floating point representation:

Sign: 1 (a negative number)

Mantissa (not-normalized): 1.1001 0101 1000 1101 0100 1111 1101

• 9. Adjust the exponent in 8 bit excess/bias notation and then convert it from decimal (base 10) to 8 bit binary (base 2), by using the same technique of repeatedly dividing it by 2, as already demonstrated above:

Exponent (adjusted) = Exponent (unadjusted) + 2^(8-1) - 1 = (4 + 127)(10) = 131(10) =
1000 0011(2)

• 10. Normalize the mantissa: remove the leading (leftmost) bit, since it's always '1' (and remove the decimal point), then adjust its length to 23 bits by removing the excess bits from the right (losing precision...):

Mantissa (not-normalized): 1.1001 0101 1000 1101 0100 1111 1101

Mantissa (normalized): 100 1010 1100 0110 1010 0111

• Conclusion:

Sign (1 bit) = 1 (a negative number)

Exponent (8 bits) = 1000 0011

Mantissa (23 bits) = 100 1010 1100 0110 1010 0111

-25.347(10) = 1 - 1000 0011 - 100 1010 1100 0110 1010 0111 (32 bit single precision IEEE 754)
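
All the steps above can be sketched as one function (my own naming; it implements the truncation variant described in this article, whereas IEEE 754 hardware rounds to nearest even, which may differ in the last bit; it handles normal numbers only, not zero, subnormals, infinities or NaN):

```python
from fractions import Fraction

def to_single_precision(x):
    """Sign, 8 bit biased exponent and 23 bit mantissa of x,
    truncating excess mantissa bits (round toward zero)."""
    sign = "1" if x < 0 else "0"
    f = Fraction(str(abs(x)))          # exact decimal value
    # normalize: find the exponent e with 1 <= f / 2^e < 2
    e = 0
    while f >= 2:
        f /= 2
        e += 1
    while f < 1:
        f *= 2
        e -= 1
    # 23 mantissa bits by repeated doubling of the fractional part
    f -= 1                             # drop the implicit leading '1'
    bits = []
    for _ in range(23):
        f *= 2
        bits.append("1" if f >= 1 else "0")
        if f >= 1:
            f -= 1
    exponent = format(e + 127, "08b")  # bias = 2^(8-1) - 1 = 127
    return sign + " " + exponent + " " + "".join(bits)

print(to_single_precision(-25.347))
print(to_single_precision(101_011_110_100_094))
```

Running it on both numbers from this article reproduces the sign, exponent and mantissa fields derived by hand above.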