[Solved] I’m confused with an output, so I’m expecting an explanation for my output


char is a type that, on most systems, takes 1 byte (8 bits). Your implementation seems to make char a signed type, but on other implementations it could be unsigned. The maximum value of a signed type is 2^(n-1)-1, where n is the number of bits, so the maximum value of char is 2^(8-1)-1 = 2^7-1 = 128-1 = 127. The minimum value is -2^(n-1), which here is -128. When you add something that goes past the maximum value, the result wraps around to the minimum value. Hence 127+1 = -128 if you are doing char arithmetic.
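Since the original program isn't shown, here is a minimal sketch of the kind of code that produces this result, assuming the question stored 127+1 in a char and printed it:

```c
#include <stdio.h>

int main(void)
{
    char c = 127;      /* maximum value of a signed 8-bit char */
    c = c + 1;         /* result no longer fits; wraps on typical two's-complement systems */
    printf("%d\n", c); /* prints -128 here, not 128 */
    return 0;
}
```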

You should generally not use plain char for arithmetic; use signed char or unsigned char instead. If you replace your char with unsigned char, the program will print 128 as expected. Just note that wraparound can still happen: unsigned types have a range from 0 to 2^n-1, so unsigned char wraps to 0 if you add 1 to 255.
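A quick sketch of the unsigned char version (again assuming the same kind of program as above):

```c
#include <stdio.h>

int main(void)
{
    unsigned char u = 127;
    u = u + 1;              /* 128 fits in unsigned char (0..255), no wrap yet */
    printf("%d\n", u);      /* prints 128 */

    unsigned char max = 255;
    max = max + 1;          /* exceeds 255, so it wraps back to 0 */
    printf("%d\n", max);    /* prints 0 */
    return 0;
}
```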

