An integer variable may be defined to be of type short, int, or long. The only difference is that an int uses at least as many bytes as a short, and a long uses at least as many bytes as an int. For example, on the author's PC, a short uses 2 bytes, an int also 2 bytes, and a long 4 bytes.
short age = 20;
int   salary = 65000;
long  price = 4500000;
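Because these sizes vary from one machine to another, they are easy to check directly with the sizeof operator. The following is a minimal sketch (not part of the text above) that prints the byte counts on whatever machine it is compiled for:

#include <iostream>

int main()
{
    // sizeof yields the size of a type in bytes on this machine
    std::cout << "short: " << sizeof(short) << " bytes\n";
    std::cout << "int:   " << sizeof(int)   << " bytes\n";
    std::cout << "long:  " << sizeof(long)  << " bytes\n";
}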
By default, an integer variable is assumed to be signed (i.e., have a
signed representation so that it can assume positive as well as negative
values). However, an integer can be defined to be unsigned by using the keyword
unsigned in its definition. The keyword signed is also allowed but is redundant.
unsigned short age = 20;
unsigned int   salary = 65000;
unsigned long  price = 4500000;
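One practical consequence, not spelled out above, is that an unsigned variable cannot hold a negative value: a negative result assigned to it wraps around modulo 2ⁿ, where n is the number of bits in the type. A small illustrative sketch (the printed value assumes a 16-bit short):

#include <iostream>

int main()
{
    unsigned short age = 20;
    age = age - 30;            // -10 wraps around instead of going negative
    std::cout << age << '\n';  // prints 65526 if short is 16 bits (65536 - 10)
}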
A literal integer (e.g., 1984) is always assumed to be of type int, unless it has an L or l suffix, in which case it is treated as a long. Also, a literal integer can be specified to be unsigned using the suffix U or u. For example:
1984L   1984l   1984U   1984u   1984LU   1984ul
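As the last two examples show, the U and L suffixes may also be combined (in either order and either case) to denote an unsigned long. In code, the suffixed literals might be used as follows (a sketch with hypothetical variable names):

int           year = 1984;    // plain literal: int
long          big  = 1984L;   // L suffix: long
unsigned int  u    = 1984U;   // U suffix: unsigned int
unsigned long n    = 1984UL;  // combined suffixes: unsigned long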
Literal integers can be expressed in decimal, octal, and hexadecimal
notations. The decimal notation is the one we have been using so far. An
integer is taken to be octal if it begins with a zero (0), and hexadecimal if it begins with 0x or 0X. For example:
92     // decimal
0134   // equivalent octal
0x5C   // equivalent hexadecimal
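All three notations denote the same value. The following sketch (using the standard stream manipulators, which go beyond the text above) prints 92 in each base:

#include <iostream>

int main()
{
    int n = 92;
    std::cout << std::dec << n << '\n';  // prints 92
    std::cout << std::oct << n << '\n';  // prints 134
    std::cout << std::hex << n << '\n';  // prints 5c
}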
Octal numbers use base 8 and can therefore only use the digits 0-7. Hexadecimal numbers use base 16 and therefore use the letters A-F (or a-f) to represent, respectively, 10-15. Octal and hexadecimal numbers are evaluated as follows:
0134 = 1 × 8² + 3 × 8¹ + 4 × 8⁰ = 64 + 24 + 4 = 92
0x5C = 5 × 16¹ + 12 × 16⁰ = 80 + 12 = 92
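These equalities can even be checked by the compiler itself, for instance with a static_assert (a C++11 feature, assumed available here):

static_assert(0134 == 92, "octal 134 is decimal 92");
static_assert(0x5C == 92, "hex 5C is decimal 92");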