[Solved] What will be the output of the below C code on a 32-bit machine? [closed]


(int)(float)(char) i; is not a definition of i. It is merely an expression statement: the value of i is converted through each cast in turn, and the final result is discarded.

#include <stdio.h>
double i;
int main(void) {
    i; // use i for nothing
    (int)i; // convert the value of i to int, then use that value for nothing
    (int)(float)i; // convert to float, then to int, then use for nothing
    (int)(float)(char)i; // convert to char, then to float, then to int, then use for nothing
    printf("sizeof i is %d\n", (int)sizeof i);
    char i; // define a new i (and hide the previous one) of type char
    printf("sizeof i is %d\n", (int)sizeof i);
}
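
On a typical 32-bit implementation, where double is 8 bytes and char is 1 byte by definition, the program prints:

sizeof i is 8
sizeof i is 1

The first printf sees the file-scope double i; the second sees the local char i that shadows it.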

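A minimal sketch (my addition, not from the original answer) of the same point: a cast expression used as a statement computes a value and throws it away. Casting to void is the conventional way to mark the discard as deliberate; compilers such as gcc otherwise tend to warn under -Wunused-value:

#include <stdio.h>

int main(void) {
    double d = 1.5;
    (int)d;     /* value computed, then discarded; gcc may warn under -Wunused-value */
    (void)d;    /* explicit discard: the void cast tells the compiler this is intentional */
    printf("%f\n", d);
}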
