# Fixed Point Arithmetic

How do we cope with decimal numbers on a CPU that can’t natively work with them? We could write our own floating point routines - which is what compilers such as *Z88DK* do - but that carries a big performance penalty.

What we need is a different way of representing numbers with decimal points. The slow part of dealing with floating point numbers is how the decimal point “floats” around in the number depending on the precision and value it is storing.

This is where *fixed point arithmetic* can be used.

## It’s like money

The best way to understand this is to think of money. We could represent money as a decimal quantity - £10.99 - and as we all learn, that just means you have 10 whole units of currency (in my case UK pounds) and 99 sub-units of currency (in my case pence). But money is physical: those 99 pennies are real items, and you could hold all 99 of them. If you had one more you’d have 100, which is also £1.

So an alternate way to think of money is purely in its smallest unit - pennies (or whatever you use). Instead of calling it “one pound” you could call it “one hundred pennies”. Mathematically all you’re doing is multiplying everything by 100 to remove the decimal point.

Fixed point maths is the same.

Instead of using decimal, we have to use binary as that’s what computers work in. So instead of multiplying by 100, we can multiply by powers of 2.

With money it’s obvious what to do to remove the decimal point. Money is decimal, and there are exactly 100 pennies in a pound. Nothing smaller - ignoring odd conventions like petrol prices - so the scale is always 100. With binary there is no such rule; it is entirely up to us as programmers to choose the scale.

## The basic concept

The basic concept is to scale all numbers up by a fixed amount to remove any fractional part, so you can work with whole numbers only. What scale you use depends on the range of numbers needed for a particular situation, and on what the hardware handles best. A Z80 CPU, for example, can work with 16-bit values reasonably easily. And unless you’re doing complex, highly accurate scientific calculations, the resolution of your machine’s screen is likely to be a factor as well.

But if you want a quick format to use, this explanation uses what are called **8.8 fixed** numbers: a 16-bit value where the upper 8 bits hold the whole number and the lower 8 bits the fractional part, with an imaginary binary point in the middle.