[Solved] Converting decimal to binary in assembly x86


Your code could be simplified a lot, and simpler code usually makes any mistakes easier to find. Here's an outline of an easier way of doing this (I'll leave it up to you to implement it in x86 assembly):

// Index (0-31) of the highest set bit of its argument. Implementing this
// is left to you (on x86 it maps to a single instruction; see below).
// Undefined for an input of 0.
unsigned highest_set_bit(unsigned input);

void to_bin_string(unsigned input, char output[33]) {
  // Handle 0 up front: highest_set_bit(0) has no meaningful answer
  // (and BSR leaves its destination undefined for a zero source).
  if (input == 0) {
    output[0] = '0';
    output[1] = '\0';
    return;
  }

  // The number of binary digits needed to represent the input number, if we
  // exclude leading zeroes.
  unsigned digits = highest_set_bit(input) + 1;

  // Shift the input so that the most significant set bit is in bit 31.
  input <<= (32 - digits);

  for (unsigned i = 0; i < digits; i++) {
    // If the current msb is set, store a '1' in the output buffer. Otherwise
    // store a '0'.
    output[i] = (input & 0x80000000u) ? '1' : '0';
    // Move the next bit into the msb position.
    input <<= 1;
  }
  output[digits] = '\0';
}

You can use the BSR instruction to compute highest_set_bit on x86 CPUs. The & operation can be done with an AND or a TEST, and << would be SHL.
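For illustration, here's roughly how those mappings might look in GAS/AT&T syntax (the register choices are mine, and this is just a sketch of the individual operations, not a complete routine):

bsr %eax, %ecx           # ecx = highest_set_bit(eax); undefined if eax == 0
inc %ecx                 # ecx = digits
test $0x80000000, %eax   # sets flags from (eax & 0x80000000), discards result
shl %eax                 # eax <<= 1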

Also keep in mind that you should typically use byte operations when accessing ASCII strings (i.e. single-byte stores, not 32-bit ones like mov %edx, buf2(,%edi,)).
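For example (sketch only; buf2 and %edi are taken from your snippet, and I'm assuming the character to store is in %dl):

movb %dl, buf2(%edi)     # store a single byte ('0' or '1')

A 32-bit mov at that spot would write four bytes at once, clobbering the three characters that follow the one you meant to store.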
