Continued Fractions
Licensed under Creative Commons


Continued fractions are a beautiful representation of real numbers, but they have some problems as well.

# A Quick Overview

Suppose I have a real number x between 0 and 1. Then the inverse of x is a real number greater than 1, so we can write 1 ÷ x = y + z, where y is a natural number and z is again a real number between 0 and 1. That means we can do the same thing to z and keep going forever (for irrational numbers) or until some 1 ÷ z is exactly an integer (for rational numbers).

365.242 = 365 + 1 ÷ (4 + 1 ÷ (7 + 1 ÷ (1 + 1 ÷ (1 + 1 ÷ (3 + 1 ÷ 2)))))

Because the full form of a continued fraction is cumbersome, it is traditional to write them in an abbreviated format: 365.242 = [365; 4, 7, 1, 1, 3, 2].
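To make the expansion above concrete, here is a minimal Python sketch (my own illustration, not part of the original algorithm discussion) that expands an exact rational into its continued-fraction terms:

```python
from fractions import Fraction

def continued_fraction(x):
    """Expand a rational x into its continued-fraction terms [a0; a1, ...]."""
    x = Fraction(x)
    terms = []
    while True:
        a = x.numerator // x.denominator   # floor of x: the next term
        terms.append(a)
        frac = x - a
        if frac == 0:                      # rational input: expansion terminates
            return terms
        x = 1 / frac                       # recurse on the reciprocal

print(continued_fraction(Fraction(365242, 1000)))  # → [365, 4, 7, 1, 1, 3, 2]
```

Using `Fraction` rather than floats keeps every step exact, so the expansion terminates cleanly for rational inputs.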

Continued fractions are natural, in that they contain no arbitrary constants (such as the 10 in decimal expansion). They are also useful for approximations, as truncating them is provably the “best” approximation, in a sense I will not define here, though I may come back to it in a later post. The Internet has a fair amount of material on these topics.

# Continued Fraction Computation


## Gosper’s Algorithms

It is possible to perform arithmetic directly on continued fractions, even infinite ones, because, unlike traditional decimal or binary arithmetic, continued fraction arithmetic yields the most significant terms first. (Since authoring this post I’ve begun to develop most-significant-first decimal/binary arithmetic; see this post.) Mark Jason Dominus has a nice explanation of how this works; you can also look at Bill Gosper’s original algorithms, documented as an appendix to

Beeler, M., Gosper, R.W., and Schroeppel, R. HAKMEM. MIT AI Memo 239, Feb. 29, 1972.
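To give a flavor of these algorithms, here is a hedged sketch (my own simplification, not Gosper’s exact formulation) of the unary case: computing the continued fraction of (a·x + b) ÷ (c·x + d) directly from the terms of x. It assumes all terms after the first are at least 1, so the unconsumed tail of x always lies in (1, ∞):

```python
def homographic(terms, a, b, c, d):
    """Emit the continued fraction of (a*x + b) / (c*x + d), where x is
    given by its finite continued-fraction terms, most significant first.
    A simplified, single-input sketch of a Gosper-style algorithm."""
    out, i = [], 0
    while True:
        # The unconsumed tail of x lies in (1, inf), so the value lies
        # between a/c (tail -> inf) and (a+b)/(c+d) (tail -> 1).  When
        # both endpoints have the same floor, that floor is certain and
        # can be emitted as the next output term.
        if c > 0 and c + d > 0 and a // c == (a + b) // (c + d):
            q = a // c
            out.append(q)
            a, b, c, d = c, d, a - q * c, b - q * d
        elif i < len(terms):
            # Otherwise ingest the next input term t: substitute x = t + 1/x'.
            t = terms[i]; i += 1
            a, b, c, d = a * t + b, a, c * t + d, c
        else:
            # Input exhausted: the value is exactly a/c; finish with the
            # ordinary Euclidean expansion of that rational.
            while c != 0:
                q = a // c
                out.append(q)
                a, c = c, a - q * c
            return out
```

For example, (a, b, c, d) = (1, 0, 0, 1) reproduces the input, and (2, 0, 0, 1) doubles it: `homographic([0, 2], 2, 0, 0, 1)` turns ½ into [1]. The full HAKMEM algorithms extend this to two continued-fraction operands (giving +, −, ×, ÷) and to infinite inputs consumed lazily.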

## Non-termination

The problem with this class of algorithms is that they might never produce any information at all. For example, the square root of 2 is [1; 2, 2, 2, …], continued forever. This is nicely computable; combining Newton’s method with Gosper’s algorithm will pump out these 2s quite quickly. It is probably possible to compute it directly as well, though I am not personally acquainted with that algorithm.
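To see those 2s doing work, here is a small sketch (my own illustration) of the convergents of [1; 2, 2, 2, …], built with the standard convergent recurrence; they close in on √2 quickly:

```python
from fractions import Fraction

def sqrt2_convergents(n):
    """First n convergents of [1; 2, 2, 2, ...], which converge to sqrt(2)."""
    # Standard recurrence: h_k = a_k * h_{k-1} + h_{k-2}, likewise for the
    # denominators; here every term after a0 = 1 is a 2.
    h_prev, h = 1, 1   # numerators
    k_prev, k = 0, 1   # denominators
    out = [Fraction(h, k)]
    for _ in range(n - 1):
        h_prev, h = h, 2 * h + h_prev
        k_prev, k = k, 2 * k + k_prev
        out.append(Fraction(h, k))
    return out

print(sqrt2_convergents(5))   # 1, 3/2, 7/5, 17/12, 41/29
```

Each extra 2 roughly squares the accuracy of the denominator, which is the sense in which the terms arrive “quickly.”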

The problem comes in the reverse. What is [1; 2, 2, …] × [1; 2, 2, …]? Gosper’s multiplication scheme will run forever without ever returning even a single term, because whether the first term is a 2 or a 1 depends on all of the infinite precision in the operands.
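The stall can be seen numerically. In this sketch (my own illustration), squaring successive convergents of √2 gives values that straddle 2 forever, so the integer part of the product (the very first output term) never settles:

```python
from fractions import Fraction

# Convergents of sqrt(2) = [1; 2, 2, ...], alternately below and above it.
convergents = [Fraction(1), Fraction(3, 2), Fraction(7, 5),
               Fraction(17, 12), Fraction(41, 29), Fraction(99, 70)]

# Squaring each convergent straddles 2, so the floor flip-flops between
# 1 and 2: no finite prefix of the inputs fixes the first output term.
for c in convergents:
    sq = c * c
    print(sq, "-> integer part", sq.numerator // sq.denominator)
```

However many terms of the operands a multiplication algorithm consumes, both floors remain possible, which is exactly the non-termination described above.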

## The Source of Non-Termination

I like continued fractions for their elegance, their finite closure under a wider set of operations than decimal expansion, and their most-significant-part-first arithmetic. But they run for unpredictable and possibly non-terminating time before providing any information at all.

I propose that a useful real-number representation would be equivalent to intervals that could be narrowed by applying more computation. Continued fractions are such a narrowing interval: truncating after an odd number of values gives a lower bound, while truncating after an even number gives an upper bound. The problem is that not every answer gives such a narrowing interval, because not all continued fractions continue. [1; 2, 2, …] is a narrowing interval, but its square, 2, must appear in an all-or-nothing way.
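The narrowing-interval behavior is easy to check. Here is a minimal sketch (my own illustration) evaluating the truncations of the expansion of 365.242 from earlier:

```python
from fractions import Fraction

def convergent(terms):
    """Evaluate a finite continued fraction [a0; a1, ..., an] exactly."""
    acc = Fraction(terms[-1])
    for a in reversed(terms[:-1]):   # fold up from the innermost term
        acc = a + 1 / acc
    return acc

cf = [365, 4, 7, 1, 1, 3, 2]   # the expansion of 365.242 from above
for n in range(1, len(cf) + 1):
    side = "lower" if n % 2 == 1 else "upper"
    print(n, convergent(cf[:n]), side)
```

The odd truncations climb up toward 365.242 from below while the even ones descend from above, so each additional term tightens the interval from one side.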

All of which raises the question: what representation should be used? It clearly will not be canonical, since we want to be able to represent two both as a finite integer and as an infinitely-narrowing interval converging on two. Beyond that, it isn’t clear. Something to think about.

# The Other Solution

The other answer to real numbers is to fix a set of operations a priori and design a number representation that remains finite under those operations. Decimal expansion does this under +, –, and ×; fractions under +, –, ×, and ÷. There are probably representations that can throw in root extraction, exponentiation, maybe even e and π, and with them (via Euler’s identity) all of trigonometry.
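Python’s `Fraction` type is an everyday instance of the fractions case; a short sketch (my own illustration) of exact closure under the four operations, and of root extraction escaping the representation:

```python
import math
from fractions import Fraction

a, b = Fraction(1, 3), Fraction(2, 7)
# Fractions are closed under +, -, *, and /: every result is a Fraction,
# represented finitely and exactly.
print(a + b, a - b, a * b, a / b)   # 13/21 1/21 2/21 7/6

# But root extraction escapes the representation: the result is an
# inexact float, not a Fraction.
print(math.sqrt(Fraction(2)))
```

Any operation outside the fixed set forces a fall back to approximation, which is the limitation the next paragraph describes.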

The problem with any such solution is that it won’t allow iterative numerical computation: whatever set of operations we pick to start with, we are stuck with just that set. Hence the desire for intermediate approximate solutions.