There are four elementary integer types plus an enumerated integer type. The first and smallest integer type is called char. It is always one byte in size, and all 'C' addresses and pointers resolve to the byte level.
This char type is designed to hold every character in the full alphabet (ASCII or EBCDIC) as a positive number, which is how it got its name. It can usually hold all of the extended codes too. Normally, a variable of type "char" is "signed" by default, although most compilers allow this default to be changed to unsigned. It is for this reason that either the signed or unsigned modifier may be used.
When a byte is 8 bits in size, variables of type signed char can hold integer values in the range -128 to +127, while unsigned char has a range of 0 to 255. In both cases the bit patterns for the range 0 to 127 are identical. (See Bits and Bytes.)
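A minimal sketch of these ranges, using the manifest constants that the standard header <limits.h> supplies (the printed values assume an 8-bit byte):

    #include <stdio.h>
    #include <limits.h>

    int main(void)
    {
        printf("signed char:   %d to %d\n", SCHAR_MIN, SCHAR_MAX);  /* -128 to 127 */
        printf("unsigned char: 0 to %u\n", (unsigned)UCHAR_MAX);    /* 0 to 255    */
        printf("plain char:    %d to %d\n", CHAR_MIN, CHAR_MAX);    /* default signedness */
        return 0;
    }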
The three remaining basic integer types are:
short int
int
long int
As mentioned before, variables of these types have a size (in bytes) at least as large as that of char, but the actual size is not defined by the language: the size is whatever is natural for the target computer. Further, it may turn out that the target computer has only one or two, not three, different sizes of integer that its machine instructions operate on. In that case two or all three of these types will have the same size.
The size in bytes determines the range of values that can be taken on by variables of these types. This size information can be determined in at least three ways: one can carefully write a portable program to explore the size, one can use the sizeof() operator, or one can look at the manifest constants in the standard header file <limits.h>.
The following relationships are guaranteed to hold:
sizeof(short int) <= sizeof(int) <= sizeof(long int)
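For example, a small program using the sizeof() operator (the values printed vary from machine to machine, but the ordering above always holds):

    #include <stdio.h>

    int main(void)
    {
        /* sizeof() yields the size in bytes; cast to unsigned for printing */
        printf("sizeof(short int) = %u\n", (unsigned)sizeof(short int));
        printf("sizeof(int)       = %u\n", (unsigned)sizeof(int));
        printf("sizeof(long int)  = %u\n", (unsigned)sizeof(long int));
        return 0;
    }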
These three types can also have the type modifiers unsigned or signed applied to them. If a declaration for a variable contains only the keyword unsigned or signed, then implicitly it is of type int. Similarly, a declaration can consist of only the keyword short or only the keyword long, meaning short int and long int, as the sketch below shows.
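Each shorthand declaration here is equivalent to the spelled-out form in its comment (the variable names are illustrative only):

    unsigned count;   /* same as: unsigned int count; */
    signed   delta;   /* same as: signed int delta;   */
    short    small;   /* same as: short int small;    */
    long     big;     /* same as: long int big;       */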
In general, the default type of anything is int. Thus a function return value or variable not explicitly typed is of type int, e.g.:
fun(param); /* both fun() and param are of type int */
But defaulting types is considered bad programming practice. When arithmetic is done in any of the integer types and the value of an expression exceeds the limits of the type, an overflow or underflow occurs. For the unsigned types such a condition is not an error: the result wraps around modulo the range of the type, and that wraparound is the defined behavior. For the signed types the language standard leaves the result undefined, though most machines wrap in the same fashion.
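A small sketch of the defined unsigned wraparound (UINT_MAX is the <limits.h> constant for the largest unsigned int value):

    #include <stdio.h>
    #include <limits.h>

    int main(void)
    {
        unsigned int u = UINT_MAX;  /* largest unsigned int value */
        u = u + 1;                  /* wraps around to 0 by definition */
        printf("UINT_MAX + 1 = %u\n", u);
        return 0;
    }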
Lastly, there is the enumerated integer type, which is declared in the following format:
enum tag { enum_list } variable_list;
Where:

    enum            is the enumeration keyword,
    tag             is an optional identifier,
    enum_list       is a list of comma-separated identifiers that take on
                    arbitrary constant values; any or none of them may be
                    initialized (red=5),
    variable_list   is zero or more comma-separated identifiers that name
                    the enumerated variables being declared.
Example:
enum day_of_week { Sunday, Monday, Tuesday, Wednesday, Thursday, Friday, Saturday } birthday, holiday;
In actual practice, enumerated variables and the manifest constants created in enum_list are all treated identically to ints, with no further checking of types, tags, or bounds. This diminishes the type's usefulness.
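A short sketch of this lack of checking, using the day_of_week example above (every assignment below is accepted without complaint):

    #include <stdio.h>

    enum day_of_week { Sunday, Monday, Tuesday, Wednesday,
                       Thursday, Friday, Saturday };

    int main(void)
    {
        enum day_of_week birthday;

        birthday = Wednesday;      /* Wednesday is the int constant 3 */
        birthday = birthday + 2;   /* plain int arithmetic: now 5 (Friday) */
        birthday = 42;             /* out of bounds, yet accepted */
        printf("birthday = %d\n", (int)birthday);
        return 0;
    }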
The unsigned or signed keywords cannot be used with this type.
It is possible to force one of the manifest constants to have a particular value, if you wish, by initializing it as in the following:
enum day_of_week { Sunday=7, Monday, Tuesday, Wednesday, Thursday, Friday, Saturday=965 } funday;
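Constants that follow an initialized one continue counting upward by one from it, so in this example Monday is 8, Tuesday is 9, and so on up to Friday at 12, while Saturday is forced to 965. A quick sketch that prints these values:

    #include <stdio.h>

    enum day_of_week { Sunday = 7, Monday, Tuesday, Wednesday,
                       Thursday, Friday, Saturday = 965 };

    int main(void)
    {
        printf("Sunday   = %d\n", Sunday);    /* 7, as initialized       */
        printf("Monday   = %d\n", Monday);    /* 8, one more than Sunday */
        printf("Friday   = %d\n", Friday);    /* 12                      */
        printf("Saturday = %d\n", Saturday);  /* 965, as initialized     */
        return 0;
    }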