Hello,

I have worked through the first lecture of the Computational Linear Algebra course, and I was wondering why the example on accuracy and floating-point arithmetic doesn’t use fractions.

The python code taken from the notebook is:

def f(x):
    if x <= 1/2:
        return 2 * x
    if x > 1/2:
        return 2 * x - 1

x = 1/10
for i in range(80):
    print(x)
    x = f(x)

If you run it, the output goes wrong because of the way the computer stores numbers. However, if you ran this with fractions in Python, so that the output is in fractions too, would it still lead to the same error? (As you can tell, maths isn’t my strength, hence why I’m doing the course…)
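To make the question concrete, here is my own rough adaptation of the notebook code (not from the course itself) using Python’s built-in fractions.Fraction, which does exact rational arithmetic:

```python
from fractions import Fraction

def f(x):
    # Same doubling map as the notebook, but the comparisons and
    # arithmetic are done with exact rationals instead of floats.
    if x <= Fraction(1, 2):
        return 2 * x
    if x > Fraction(1, 2):
        return 2 * x - 1

x = Fraction(1, 10)
for i in range(80):
    print(x)  # prints exact fractions like 1/10, 1/5, 2/5, 4/5, 3/5, ...
    x = f(x)
```

When I try this, the values settle into the exact cycle 1/5 → 2/5 → 4/5 → 3/5 forever, with no drift, whereas the float version gradually accumulates rounding error.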

Cheers.