In a computer, a byte is composed of 8 binary bits, and a word is composed of a number of bytes, with the word length dependent on the machine. A user should consult the manufacturer's documentation to determine the word size of a particular computer system.
Therefore, a character, integer, or decimal number must be represented as a bit combination. There are some generally accepted practices for doing this:
Character: Almost all computers today use the ASCII standard to represent a character in a byte. For example, "A" is represented by 65₁₀, "a" by 97₁₀, and "1" by 49₁₀. Most architecture textbooks include a table giving the ASCII representation of every character. In the past there were vendor-specific representations such as IBM's EBCDIC.
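The ASCII values quoted above can be checked directly in Python, whose `ord` and `chr` built-ins map between a character and its code point:

```python
# ASCII code points for the characters mentioned in the text.
print(ord("A"))  # 65
print(ord("a"))  # 97
print(ord("1"))  # 49

# The mapping runs the other way as well.
print(chr(65))   # A
```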
Integer: The computer must be able to store both signed integers (e.g., temperature readings) and unsigned integers (e.g., memory addresses), manipulate them, and determine whether an error has occurred during manipulation. Most computers use a two's complement representation for signed numbers and the pure binary magnitude for unsigned numbers.
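A minimal sketch of two's complement encoding and decoding for an n-bit word (the function names here are illustrative, not from any standard library):

```python
def to_twos_complement(value, bits=8):
    """Encode a signed integer as an n-bit two's-complement pattern."""
    if not -(1 << (bits - 1)) <= value < (1 << (bits - 1)):
        raise OverflowError(f"{value} does not fit in {bits} signed bits")
    return value & ((1 << bits) - 1)  # mask keeps the low n bits

def from_twos_complement(pattern, bits=8):
    """Decode an n-bit two's-complement pattern back to a signed integer."""
    if pattern & (1 << (bits - 1)):       # sign bit set: negative number
        return pattern - (1 << bits)
    return pattern

print(format(to_twos_complement(-1), "08b"))   # 11111111
print(from_twos_complement(0b10000000))        # -128
```

Note that the same bit pattern (here `11111111`) means 255 when read as an unsigned magnitude but -1 when read as an 8-bit two's-complement number; the interpretation lives in the instruction, not in the bits.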
Decimal number: Decimal numbers are represented in a floating-point representation, the most important being IEEE Standard 754, which provides a 32-bit single-precision and a 64-bit double-precision format with 8-bit and 11-bit exponents and 23-bit and 52-bit fractions, respectively. The IEEE standard has become widely accepted and is used in most contemporary processors and arithmetic coprocessors.
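The three fields of the 32-bit single-precision format can be pulled apart with Python's `struct` module; this helper function is a sketch for illustration:

```python
import struct

def float32_fields(x):
    """Split a value's IEEE 754 single-precision encoding into
    its sign, biased exponent, and fraction fields."""
    (bits,) = struct.unpack(">I", struct.pack(">f", x))
    sign = bits >> 31
    exponent = (bits >> 23) & 0xFF      # 8-bit exponent, bias 127
    fraction = bits & 0x7FFFFF          # 23-bit fraction
    return sign, exponent, fraction

print(float32_fields(1.0))    # (0, 127, 0): +1.0 x 2^(127-127)
print(float32_fields(-2.5))   # (1, 128, 2097152): -1.25 x 2^(128-127)
```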
The computer is a finite machine, so it can represent only a finite range of integers and a subset of the fractions. As a result, a user may attempt operations whose numeric results fall outside the representable values. The computer must recognize such cases and report adequate information to the user. Errors in signed-integer operations are called overflow errors; floating-point operations can produce overflow or underflow errors; errors in unsigned-integer operations are called carry errors.
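The distinction between carry (unsigned) and overflow (signed) can be seen in an 8-bit addition; this simulation is a sketch, not any particular processor's flag logic, though the sign-comparison rule it uses is the standard one:

```python
def add_8bit(a, b):
    """Add two 8-bit patterns; report unsigned carry and signed overflow."""
    result = (a + b) & 0xFF
    carry = (a + b) > 0xFF                 # unsigned result did not fit
    # Signed overflow: the operands share a sign but the result's differs.
    sa, sb, sr = a >> 7, b >> 7, result >> 7
    overflow = (sa == sb) and (sr != sa)
    return result, carry, overflow

print(add_8bit(0x7F, 0x01))  # (128, False, True): 127 + 1 overflows signed
print(add_8bit(0xFF, 0x01))  # (0, True, False): 255 + 1 carries unsigned
```

The same addition thus raises different error conditions depending on whether the operands are interpreted as signed or unsigned, which is why hardware reports both flags and lets the program decide which one matters.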
Computers store numbers in two's complement or floating-point representation because these encodings are compact and, more importantly, simplify the arithmetic hardware: two's complement lets the same adder circuit perform both addition and subtraction, and floating point covers a wide range of magnitudes in a fixed number of bits. Operations are performed directly on these representations because doing so generally yields better performance than converting to another form first.
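The hardware benefit of two's complement can be sketched directly: subtraction reduces to complement-and-add, so no separate subtractor circuit is needed.

```python
BITS = 8
MASK = (1 << BITS) - 1

def subtract_via_adder(a, b):
    """Compute a - b using only addition and bitwise NOT, as the ALU does:
    a - b = a + (~b + 1), all modulo 2^BITS."""
    return (a + ((~b & MASK) + 1)) & MASK

print(subtract_via_adder(5, 3))   # 2
print(subtract_via_adder(3, 5))   # 254, the 8-bit pattern for -2
```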
The computer architect must determine the algorithm used to perform each arithmetic operation and the mechanism used to convert from one representation to another. After the movement of data from one location to another, arithmetic operations are the most commonly performed operations; as a result, these arithmetic algorithms significantly influence the performance of the computer. The ALU and the shifter perform most of the arithmetic operations on the data path.
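The role of the ALU and shifter on the data path can be sketched as a single function that selects an operation and masks the result to the word width. This toy model is purely illustrative and does not correspond to any specific processor's design:

```python
def alu(op, a, b, bits=8):
    """A toy ALU over n-bit unsigned patterns (hypothetical sketch)."""
    mask = (1 << bits) - 1
    ops = {
        "add": lambda: a + b,
        "sub": lambda: a + ((~b & mask) + 1),   # subtraction via the adder
        "and": lambda: a & b,
        "or":  lambda: a | b,
        "shl": lambda: a << 1,                  # shifter: logical shift left
        "shr": lambda: a >> 1,                  # shifter: logical shift right
    }
    return ops[op]() & mask

print(alu("add", 200, 100))  # 44 (300 mod 256)
print(alu("shl", 0x41, 0))   # 130 (0b01000001 becomes 0b10000010)
```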