[Solved] What will be the output in C? [duplicate]


Warning: long-winded answer ahead. Edited to reference the C standard and to be clearer and more concise with respect to the question being asked.

The correct answer for why you get 32 has been given a few times. Explaining the math using modular arithmetic is completely correct, but it might be harder to grasp intuitively if you are new to programming. So, in addition to the existing correct answers, here's a visualization.

Your char is an 8-bit type, so it is made up of a series of 8 zeros and ones.

Looking at the raw bits in binary, when the type is unsigned (let's leave signed types out of it for a moment, as they would just confuse the point), your variable c can take on values in the following range:

00000000 -> 0
11111111 -> 255
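
If you want to confirm the size and range on your own machine, the limits live in <limits.h>. Here's a minimal sketch; the values in the comments assume the typical 8-bit char:

    #include <limits.h>
    #include <stdio.h>

    int main(void)
    {
        printf("bits in a char: %d\n", CHAR_BIT);     /* typically 8 */
        printf("unsigned char max: %d\n", UCHAR_MAX); /* typically 255 */
        return 0;
    }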

Now, c*200 = 800, which is of course larger than 255. In binary, 800 looks like this:

00000011 00100000

To represent this value in memory you need at least 10 bits (note the two 1s in the upper byte). As an aside, the leading zeros don't need to be stored explicitly, since they have no effect on the value; however, the next larger integer type is 16 bits wide, and it's easier to show consistently sized groupings of bits anyway, so there it is.
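
If you'd like to print that bit pattern for yourself, here's a small sketch that walks the low 16 bits of 800 from most significant to least significant (the space between the two bytes is just for readability):

    #include <stdio.h>

    int main(void)
    {
        unsigned v = 4 * 200;                    /* 800, comfortably in range for int */
        for (int i = 15; i >= 0; i--) {          /* low 16 bits, high to low */
            putchar(((v >> i) & 1) ? '1' : '0');
            if (i == 8)
                putchar(' ');                    /* gap between the two bytes */
        }
        putchar('\n');                           /* prints: 00000011 00100000 */
        return 0;
    }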

Since the char type is limited to 8 bits and cannot represent the result, there needs to be a conversion. ISO/IEC 9899:1999 section 6.3.1.3 says:

6.3.1.3 Signed and unsigned integers

1 When a value with integer type is converted to another integer type other than _Bool, if
the value can be represented by the new type, it is unchanged.

2 Otherwise, if the new type is unsigned, the value is converted by repeatedly adding or
subtracting one more than the maximum value that can be represented in the new type
until the value is in the range of the new type.

3 Otherwise, the new type is signed and the value cannot be represented in it; either the
result is implementation-defined or an implementation-defined signal is raised.

So, since your new type is unsigned, rule #2 applies: we repeatedly subtract one more than the maximum value of the new type (255 + 1 = 256) from 800 until the value is in range. That takes three steps: 800 - 256 = 544, 544 - 256 = 288, 288 - 256 = 32. This behaviour also happens to effectively truncate the result; as you can see, the higher bits which could not be represented have simply been discarded.

00100000 -> 32
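
You can mimic rule #2 quite literally in code. This sketch assumes an 8-bit unsigned destination type, so "one more than the maximum value" is 256:

    #include <stdio.h>

    int main(void)
    {
        int value = 800;
        while (value > 255)     /* rule #2: repeatedly subtract 256... */
            value -= 256;       /* 800 -> 544 -> 288 -> 32 */
        while (value < 0)       /* ...or add it, if we started negative */
            value += 256;
        printf("%d\n", value);  /* prints 32 */
        return 0;
    }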

The existing answers explain this using the modulo operation: 800 % 256 = 32. Modulo simply gives the remainder of a division. When we divide 800 by 256 we get 3 (because 256 fits into 800 at most three times) with a remainder of 32; those are exactly the three subtractions of 256 that rule #2 performs.
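
Here is the actual conversion side by side with the modulo view; both lines should print 32 (again assuming an 8-bit unsigned char):

    #include <stdio.h>

    int main(void)
    {
        unsigned char c = 4;
        c = c * 200;               /* the multiply happens in int (800), then */
                                   /* rule #2 applies on assignment back to c */
        printf("%d\n", c);         /* prints 32 */
        printf("%d\n", 800 % 256); /* prints 32 as well */
        return 0;
    }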

Hopefully this clarifies why you get a result of 32. However, as has been correctly pointed out, if the destination type is signed we're looking at rule #3, which says the result is implementation-defined. Since the standard also says that whether a plain char is signed or unsigned is itself implementation-defined, your particular case is implementation-defined too. In practice, however, you will typically see the same behaviour: the higher bits are lost, and you will still generally get 32.

Extending this a bit: if you had a signed 8-bit destination type and ran your code with c = c*250 instead, you would have:

00000011 11101000 -> 1000

and you will probably find that, after the conversion to the smaller signed type, the result is similarly truncated to:

11101000

which in a signed type is interpreted as -24 on most systems, which use two's complement representation. Indeed, this is what happens when I run it with gcc, but again, this is not guaranteed by the language itself.
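
For completeness, here's the signed variant. Keep in mind the standard only promises an implementation-defined result here; the -24 in the comment is what gcc gives on a typical two's complement machine:

    #include <stdio.h>

    int main(void)
    {
        signed char c = 4;
        c = c * 250;        /* 1000 does not fit in a signed char, so the */
                            /* converted result is implementation-defined */
        printf("%d\n", c);  /* prints -24 with gcc on two's complement hardware */
        return 0;
    }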

