&lt;C Primer Plus&gt; Chapters 1-6

Original
2017/08/01 22:52

============================================================================

Note: Why Input and Output Are Not Built In

Perhaps you are wondering why facilities as basic as input and output aren’t included automatically. One answer is that not all programs use this I/O (input/output) package, and part of the C philosophy is to avoid carrying unnecessary weight. This principle of economic use of resources makes C popular for embedded programming—for example, writing code for a chip that controls an automotive fuel system or a Blu-ray player.

============================================================================

Syntax Errors vs. Semantic Errors

Syntax Errors: You commit a syntax error when you don’t follow C’s rules. It’s analogous to a grammatical error in English. For instance, consider the following sentence: Bugs frustrate be can. This sentence uses valid English words but doesn’t follow the rules for word order, and it doesn’t have quite the right words, anyway. C syntax errors use valid C symbols in the wrong places. 

Semantic Errors: Semantic errors are errors in meaning. For example, consider the following sentence: Scornful derivatives sing greenly. The syntax is fine because adjectives, nouns, verbs, and adverbs are in the right places, but the sentence doesn’t mean anything. In C, you commit a semantic error when you follow the rules of C correctly but to an incorrect end.  

============================================================================

Bits, Bytes, and Words

The terms bit, byte, and word can be used to describe units of computer data or to describe units of computer memory. We’ll concentrate on the second usage here.

The smallest unit of memory is called a bit. It can hold one of two values: 0 or 1. (Or you can say that the bit is set to “off” or “on.”) You can’t store much information in one bit, but a computer has a tremendous stock of them. The bit is the basic building block of computer memory.

The byte is the usual unit of computer memory. For nearly all machines, a byte is 8 bits, and that is the standard definition, at least when used to measure storage. (The C language, however, has a different definition, as discussed in the “Using Characters: Type char” section later in this chapter.) Because each bit can be either 0 or 1, there are 256 (that’s 2 times itself 8 times) possible bit patterns of 0s and 1s that can fit in an 8-bit byte. These patterns can be used, for example, to represent the integers from 0 to 255 or to represent a set of characters. Representation can be accomplished with binary code, which uses (conveniently enough) just 0s and 1s to represent numbers. (Chapter 15, “Bit Fiddling,” discusses binary code, but you can read through the introductory material of that chapter now if you like.)

A word is the natural unit of memory for a given computer design. For 8-bit microcomputers, such as the original Apples, a word is just 8 bits. Since then, personal computers moved up to 16-bit words, 32-bit words, and, at present, 64-bit words. Larger word sizes enable faster transfer of data and allow more memory to be accessed.

============================================================================

Integer Overflow 

/* toobig.c -- exceeds maximum int size on our system */
#include <stdio.h>
int main(void)
{
    int i = 2147483647;
    unsigned int j = 4294967295;

    printf("%d %d %d\n", i, i+1, i+2);
    printf("%u %u %u\n", j, j+1, j+2);

    return 0;
}

Here is the result for our system:

2147483647    -2147483648    -2147483647

4294967295    0    1 

The behavior shown here for unsigned types is mandated by the rules of C: unsigned arithmetic wraps around, modulo one more than the maximum value. For signed types, the standard leaves overflow undefined. The wraparound shown here is typical, but you could encounter something different.

============================================================================

Why does C automatically expand a type short value to a type int value when it's passed as an argument to a function?

The answer to this question is that the int type is intended to be the integer size that the computer handles most efficiently. So, on a computer for which short and int are different sizes, it may be faster to pass the value as an int. 

============================================================================

 

 

 

 
