# Floating point arithmetic example in Lecture 1

Hello,

I have worked through the first lecture of the Computational Linear Algebra course, and I was wondering why the example on accuracy and floating-point arithmetic doesn’t use fractions?

The Python code, taken from the notebook, is:

```python
def f(x):
    if x <= 1/2:
        return 2 * x
    if x > 1/2:
        return 2 * x - 1

x = 1/10
for i in range(80):
    print(x)
    x = f(x)
```

If you run it, the output drifts into error because of the way the computer stores numbers. However, if you ran this with fractions in Python, with the output also in fractions, would it still lead to the same error? (As you can tell, maths isn’t my strength, hence why I’m doing the course…)

Cheers.

Are you asking whether this code works fine if you use the `Fraction` class from the `fractions` module? Then the answer is yes: it does not have the floating-point precision issue (because it’s not floating point).
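To make that concrete, here is a sketch (not from the course notebook) of the same doubling map rewritten with `Fraction`. With exact rational arithmetic, 1/10 settles into the cycle 1/5 → 2/5 → 4/5 → 3/5 → 1/5 forever, instead of drifting away as the float version does:

```python
from fractions import Fraction

def f(x):
    # Same doubling map as the lecture, but in exact rational arithmetic
    if x <= Fraction(1, 2):
        return 2 * x
    return 2 * x - 1

x = Fraction(1, 10)
seen = []
for i in range(80):
    seen.append(x)
    x = f(x)

# Every iterate is an exact fraction; the orbit cycles with period 4
# (1/5, 2/5, 4/5, 3/5, 1/5, ...) after the first step.
```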

Or are you asking, since it works fine with fractions, why we don’t use fractions instead of floating point?

First off, not all numbers can be expressed as fractions (easy examples: π or e). Now, this is more of a maths issue than a computer issue. On a computer you could simply say, for example, that π is the fraction `31415926536 / 10000000000` and it would be close enough. This gives you ten digits after the decimal point.
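For instance (my own illustration, not part of the lecture), Python stores that fraction exactly, but it is still only an approximation of the irrational number π:

```python
from fractions import Fraction
import math

# Rational approximation of pi with ten digits after the decimal point.
# Fraction stores it exactly (and reduces it automatically), but it still
# differs from the true value of pi.
pi_approx = Fraction(31415926536, 10000000000)

error = abs(float(pi_approx) - math.pi)
# error is tiny (around 1e-11) but not zero: the fraction is exact,
# the approximation of pi is not.
```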

The problem is that fraction arithmetic is not very efficient: numerators and denominators can grow without bound as a computation proceeds. The reason we use floating point, despite its shortcomings, is that it allows for fast computation, and that floats fit in a fixed 32, 64, or 128 bits of storage while still covering a wide range of very small to very large numbers.
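A quick sketch of that growth (my own example): summing the reciprocals 1/1 + 1/2 + … + 1/n exactly forces the running sum's denominator to grow rapidly, so each `Fraction` addition gets more expensive, while a float stays one machine word throughout.

```python
from fractions import Fraction

# Exact harmonic sum 1/1 + 1/2 + ... + 1/30.
total = Fraction(0)
for n in range(1, 31):
    total += Fraction(1, n)

# The exact result needs a very large denominator; the float version of the
# same sum would occupy just 64 bits no matter how many terms we add.
print(total.denominator)  # a large integer
print(float(total))       # roughly 3.99, the 30th harmonic number
```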


Hi Matthijs, thanks for the reply. I was alluding to the second half of your answer, but I probably wasn’t very explicit. Thanks for taking the time to explain; you’ve already increased my knowledge of maths and computing!

Much appreciated.