The following article was printed in the February 1980 issue of BYTE magazine.
A basic article on binary to binary-coded-decimal conversion.

A Fast, Multibyte Binary to Binary-Coded-Decimal Conversion Routine


Michael R McQuade
School of Electrical Engineering
Van Leer Building
Georgia Institute of Technology
Atlanta GA 30332

A problem which has confronted users of small computer systems over the years has been the incompatibility of the number representation required by output devices and that used for internal processing. Output devices used by the small systems need to receive binary-coded-decimal (BCD) or ASCII (American Standard Code for Information Interchange) data representations, while the microprocessor is most efficient when handling a straight binary number. Several solutions to the problem exist, and as would be expected, each has its own advantages and disadvantages.
Some users choose to initially store all numbers in their binary-coded-decimal representation and do all subsequent processing in this format. This has the advantage of easy and quick conversion of the numbers into the required output format. At worst, the binary-coded-decimal represented number must be converted to an ASCII format. This requires attaching a fixed 4-bit prefix to each binary-coded-decimal digit.
A disadvantage associated with this approach is that arithmetic operations take longer to perform, since the results must be decimally adjusted after each operation. Also, more memory is required to store the binary-coded-decimal form of the number than is required for its straight binary equivalent. A direct result of this increased memory requirement is the need to perform more memory-access operations to transfer the numbers into and out of the processor. Memory accesses are a very time-consuming operation.
For the users who choose a straight, binary-number representation for internal storage, the advantages of efficient memory utilization and straightforward arithmetic are gained. The question of how to convert the numbers to an acceptable output format for the display device still remains to be answered. This question basically reduces down to converting the binary numbers to binary-coded-decimal form.

About the Author
Mike McQuade is currently working towards a PhD degree in the Computer Architecture Laboratory at the School of Electrical Engineering at the Georgia Institute of Technology. He has instructed computer courses there, and has taught short microprocessor courses for the Institute of Electrical and Electronics Engineers at both national and regional levels.

Methods of Conversion

There are three basic approaches in wide use. The first approach is to count the binary number down to 0 while incrementing its binary-coded-decimal counterpart up from 0 using modulo-10 counting. Modulo-10 counting performs a decimal-adjust operation after each incremental addition. This method is conceptually easy and requires a minimum of program code if the microprocessor has a decimal-adjust instruction. The counting method can, however, be very time-consuming if large numbers are being converted. For some applications this time penalty would be irrelevant (eg: if the output device is very slow when compared to the processor's cycle time). For a slow output device, any time savings realized by using a faster conversion routine usually has to be wasted in a wait loop.
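To make the counting method concrete, here is a minimal C sketch of the idea (the article's own routines are written in 8080 assembly; this illustration, including the function name bin_to_bcd_counting, is not taken from the article): the binary value is counted down to zero while a packed binary-coded-decimal accumulator is counted up, with a digit-by-digit decimal adjust after every increment.

#include <stdint.h>

/* Counting-method sketch (illustrative C, not the article's 8080 code):
   count the binary value down to zero while counting a packed-BCD
   accumulator up from zero, decimally adjusting after every increment. */
static void bin_to_bcd_counting(uint32_t bin, uint8_t bcd[], int nbytes)
{
    for (int i = 0; i < nbytes; i++)
        bcd[i] = 0;                              /* BCD result starts at zero */

    while (bin--) {                              /* one pass per unit counted */
        int carry = 1;                           /* add 1 to the low BCD byte */
        for (int i = 0; i < nbytes && carry; i++) {
            int lo = (bcd[i] & 0x0F) + carry;
            int hi = bcd[i] >> 4;
            carry = 0;
            if (lo > 9) { lo -= 10; hi++; }      /* adjust low decimal digit  */
            if (hi > 9) { hi -= 10; carry = 1; } /* adjust high decimal digit */
            bcd[i] = (uint8_t)((hi << 4) | lo);
        }
    }
}

The inner loop is the software equivalent of modulo-10 counting: every increment is immediately followed by a decimal adjust, which is exactly why the method becomes slow for large values.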
The second approach is to use some form of table lookup routine. Assuming that the table is extensive enough, the lookup technique performs a very fast conversion. The drawback to this technique is that as the size of the numbers being handled gets larger, either a great deal of memory must be dedicated to the table, or some type of divide-with-remainder scheme must be implemented. The division scheme allows the table size to remain small, but it causes the conversion time to increase. As was pointed out earlier, this may not be important. If the processor being used does not have a decimal-adjust instruction and the numbers encountered are not too large, this second method is very popular.
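As a rough illustration of the lookup approach (again a hedged C sketch rather than anything from the article), a 100-entry table converts the remainders produced by repeated division by 100 directly into packed binary-coded decimal; the table stays small at the cost of the divisions, as discussed above.

#include <stdint.h>

/* Table-lookup sketch (illustrative C, not from the article): a 100-entry
   table maps 0..99 to packed BCD; larger values are reduced by repeated
   division by 100, trading conversion time for a small table. */
static uint8_t bcd_table[100];

static void init_bcd_table(void)
{
    for (int i = 0; i < 100; i++)
        bcd_table[i] = (uint8_t)(((i / 10) << 4) | (i % 10));
}

/* Convert a 16-bit binary value to three packed-BCD bytes, low byte first. */
static void bin16_to_bcd(uint16_t bin, uint8_t bcd[3])
{
    bcd[0] = bcd_table[bin % 100];  bin /= 100;
    bcd[1] = bcd_table[bin % 100];  bin /= 100;
    bcd[2] = bcd_table[bin % 100];
}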
The third approach in converting from straight binary to binary-coded decimal is to use an algorithm based on the structure of the binary number system. Given the binary number:

b_n b_{n-1} b_{n-2} ... b_2 b_1 b_0

where each b can represent either a 1 or a 0, and b_n is the most significant bit, it can be expanded as:
b_n × 2^n + b_{n-1} × 2^{n-1} + ... + b_1 × 2^1 + b_0 × 2^0
(Form I).
Form I is not conducive to an iterative-type binary to binary-coded decimal conversion routine, but can be rewritten as:
( ... ((b_n × 2) + b_{n-1}) × 2 + ... + b_1) × 2 + b_0
(Form II). For example, the 4-bit number 1101 (decimal 13) evaluates under Form II as ((1 × 2 + 1) × 2 + 0) × 2 + 1 = 13.
Form II contains only the decimal numbers 0, 1, and 2, which have the same representations in either straight binary or binary-coded decimal. Straight binary and binary-coded-decimal representations of a number differ only for numbers greater than 9. While straight binary adheres strictly to position weighting in powers of 2, binary-coded decimal treats each decimal digit of the number independently and represents it as a 4-bit straight binary number.
If Form II is implemented using binary-coded-decimal arithmetic (performing a decimal adjust after each addition), the final result will be in binary-coded-decimal representation. Form II lends itself to an iterative-type implementation which allows it to be coded to easily accommodate any size number.
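The 8080 listings given later implement this iteration directly; as a language-neutral illustration, a hypothetical C version of the same Form II procedure might look like the sketch below. The function name and the low-order-byte-first storage convention are chosen to match the description of the 8080 subroutine later in the article, but the code itself is not from the article.

#include <stdint.h>

/* Form II sketch (illustrative C, not the article's 8080 listing): scan the
   binary number from its most significant bit downward; for each bit, double
   the packed-BCD result and add the bit in, decimally adjusting every BCD
   byte as the doubling is carried out. */
static void bin_to_bcd_form2(const uint8_t *bin, int bin_bytes,
                             uint8_t *bcd, int bcd_bytes)
{
    for (int i = 0; i < bcd_bytes; i++)
        bcd[i] = 0;                                  /* result starts at zero */

    for (int byte = bin_bytes - 1; byte >= 0; byte--) {   /* MSB first */
        for (int bit = 7; bit >= 0; bit--) {
            /* carry in: the next bit of the binary number (a b term of Form II) */
            int carry = (bin[byte] >> bit) & 1;

            /* BCD result = 2 * BCD result + carry, low-order byte first */
            for (int i = 0; i < bcd_bytes; i++) {
                int sum = (bcd[i] & 0x0F) * 2 + carry;       /* low digit  */
                int lo  = sum % 10;
                carry   = sum / 10;
                sum     = (bcd[i] >> 4) * 2 + carry;         /* high digit */
                int hi  = sum % 10;
                carry   = sum / 10;
                bcd[i]  = (uint8_t)((hi << 4) | lo);
            }
        }
    }
}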
Carry    Auxiliary Carry    Correction Factor
  0             0               10011010
  0             1               10100000
  1             0               11111010
  1             1               00000000

Table 1: Correction factors in binary for the binary to binary-coded-decimal (BCD) conversion algorithm.
Much has been said about performing a decimal-adjust operation when operating on numbers in the binary-coded-decimal format. When two binary-coded-decimal numbers are added by the processor's straight binary-adding accumulator, the result is not in binary-coded-decimal form. It is necessary to perform one more operation after each addition to correct for the fact that the processor's arithmetic logic is designed to add straight binary numbers. This extra operation is the decimal adjust. Many of the microprocessors on the market today have the decimal-adjust operation contained in their instruction sets.
If the processor being used does not contain a decimal-adjust instruction, it is still possible to perform a decimal-adjust operation. What must be done is to allow for the fact that a binary-coded-decimal number uses only ten of the sixteen possible 4-bit combinations for each digit. If two binary-coded-decimal numbers are added together, and the least significant 4 bits of the result have a value greater than 9, then 6 must be added to the result. It is necessary to add 6 to skip over the six unallowed BCD bit combinations. The next 4 bits of the result are then tested, and 6 is added to them if necessary. This is repeated across the entire result.

A Better Method

The above method works in theory but is rather awkward to program. Let us examine a method based on the above theory which lends itself to straightforward programming. The method will be for 8-bit processors since they are the most popular. First it is necessary to keep track not only of the carry out of the eighth bit position, but also the carry from the fourth to fifth bit position. This second carry will be referred to as the auxiliary carry. The adjusted addition of two binary-coded-decimal bytes then proceeds in three steps:

(1) Add the binary number 01100110 to the first number.
(2) Add the second number to the result generated in step 1. Keep track of both the carry and auxiliary carry from this addition. The carry generated here is the true carry to the next higher digit.
(3) Based on the carry and auxiliary carry generated in step 2, add one of the correction factors shown in table 1 to the result of step 2.

The result has now been decimally adjusted.
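A hedged C rendering of these three steps (not the article's code; the function name bcd_add_adjust and the way the carry is returned are illustrative choices) shows how the correction factors of table 1 are applied:

#include <stdint.h>

/* Three-step decimal adjust for processors without a DAA instruction,
   following the steps above (illustrative C, not the article's listing).
   Adds two packed-BCD bytes; the true decimal carry is returned through
   *carry_out. */
static uint8_t bcd_add_adjust(uint8_t a, uint8_t b, int *carry_out)
{
    /* Step 1: add the binary number 01100110 (0x66) to the first number. */
    unsigned t = a + 0x66;

    /* Step 2: add the second number, noting the carry out of bit 7 and the
       auxiliary carry out of bit 3.  The carry here is the true carry. */
    unsigned sum  = t + b;
    int carry     = (sum > 0xFF);
    int aux_carry = (((t & 0x0F) + (b & 0x0F)) > 0x0F);
    sum &= 0xFF;

    /* Step 3: add the correction factor from table 1, indexed by the two
       carries; any carry out of this addition is discarded. */
    static const uint8_t correction[2][2] = {
        /*              aux = 0  aux = 1 */
        /* carry = 0 */ { 0x9A,   0xA0 },
        /* carry = 1 */ { 0xFA,   0x00 },
    };
    *carry_out = carry;
    return (uint8_t)((sum + correction[carry][aux_carry]) & 0xFF);
}

For example, adding the packed-BCD bytes 0x28 and 0x39 gives 0x8E + 0x39 = 0xC7 after steps 1 and 2, with no carry and an auxiliary carry; adding the correction factor 10100000 (0xA0) then yields 0x67, the correct binary-coded-decimal sum of 28 and 39.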
The program shown in listing 1 and the flowchart shown in figure 1 provide an implementation of Form II using binary-coded-decimal arithmetic for the Intel 8080 microprocessor. It uses the decimal-adjust (DAA) instruction in the 8080's instruction set. A simple program shown in listing 2 converts data from binary-coded decimal to ASCII representation. The conversion from binary-coded decimal to ASCII entails taking each of the two 4-bit binary-coded-decimal digits, placing each in its own byte, and prefixing it with the binary pattern 0011. Both programs are coded as subroutines, since these forms are usually more convenient to include in larger programs.
Listing 1: The multibyte binary to binary-coded-decimal (BCD) conversion algorithm coded as a subroutine for the 8080 microprocessor.
[8080 and Z80 source]
[Flowchart]
Figure 1: Flowchart of the algorithm for the binary to binary-coded-decimal (BCD) conversion subroutine.
The binary-to-binary-coded-decimal subroutine of listing 1 requires contiguous memory locations to hold the binary-coded-decimal result. The address of the memory location for the low-order byte of the binary-coded-decimal number has been labeled BCDNL (binary-coded-decimal number location) in the subroutine. The number is ordered upwards in memory. Register E must contain the number of bytes in the binary-coded-decimal number when the subroutine is called. If more bytes are specified than are needed, the extra will be filled with leading zeros.
The other parameters which must be passed to the subroutine are the number of bytes in the binary number and the address of the low-order byte of the binary number. The number of bytes in the binary number is to be in register D, while the address of the low-order byte is in register pair HL. The binary number is assumed to be stored in memory using the same convention as the binary-coded-decimal number. The more significant bytes are found at increasing memory addresses.
By having register pair HL point to the binary number, the routine can be used to convert all binary numbers required by the user's program without moving them to a specific location. All results are put in the same location, since this is temporary storage needed only until the number is sent to the display device.
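For readers following the C sketches rather than the 8080 listings, a hypothetical call that mirrors this convention (low-order bytes first, byte counts supplied by the caller) might look like the following; it assumes the bin_to_bcd_form2 sketch given earlier is in scope.

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* 0x3039 = 12345, stored low-order byte first as the article describes */
    uint8_t bin[2] = { 0x39, 0x30 };
    uint8_t bcd[3];                    /* room for six BCD digits            */

    bin_to_bcd_form2(bin, 2, bcd, 3);  /* unused high bytes become leading 0s */

    printf("%02X %02X %02X\n", bcd[2], bcd[1], bcd[0]);   /* prints 01 23 45 */
    return 0;
}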
Listing 2: A subroutine to convert a single-byte, 2-digit, binary-coded-decimal number to two single-byte ASCII characters, coded for the 8080 microprocessor.
[8080 and Z80 source]
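A rough C analogue of the conversion performed by listing 2 (illustrative only, not the original 8080 code) splits a packed binary-coded-decimal byte into its two digits and prefixes each with 0011 to form the ASCII characters '0' through '9':

#include <stdint.h>

/* BCD-to-ASCII sketch (illustrative C, not the article's listing): each
   4-bit BCD digit is placed in its own byte with the prefix 0011 (0x30),
   producing the ASCII codes for the digits 0 through 9. */
static void bcd_to_ascii(uint8_t bcd, char out[2])
{
    out[0] = (char)(0x30 | (bcd >> 4));    /* high digit: 0011 dddd */
    out[1] = (char)(0x30 | (bcd & 0x0F));  /* low digit:  0011 dddd */
}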
[Complete 8080 and Z80 source]
The binary to binary-coded-decimal conversion subroutine provided can handle binary numbers of any length up to and including 31 bytes. Since 31 bytes is 248 bits, this corresponds to a decimal number in excess of 4.5 × 10^74, with a full 75 significant digits. This should be adequate to handle any physical quantity encountered. To establish a reference, it is only about 1.5 × 10^21 angstroms from the earth to the sun. (An angstrom is one ten-billionth of a meter, that is 1/10^10, and is normally used to measure the wavelength of light.)
The routines provided have been tested using a high-speed line printer as an output device. They were fast enough that the line printer never had to wait when being sent a stream of 6-digit numbers. While the routines have been tested and were fast enough for the desired applications, no extensive effort was made to eliminate every unneeded processor cycle. The object code provided in listings 1 and 2 will also execute on an Intel 8085 or a Zilog Z80 microprocessor.

Scanned by Werner Cirsovius
April 2012
© BYTE Publications Inc.