Remez Exchange Algorithm

02/05/08 - 18:00 PST - Posted by Mike Day, Senior Engine Programmer

I was just telling Mark about some stuff I’ve been doing at home lately, and I thought I’d share. I’ve been trying my hand at the so-called Remez Exchange Algorithm, which is an iterative technique for finding coefficients of minimax polynomials (see example below). The results are promising… I’ve already got it to a state where we could actually use it, and once I’ve done a bit more work on it, maybe during post, I’ll put it up somewhere and write a wiki page on it.

For example…

Take a familiar case – the sine function. When you call sin() in a floating-point library (or the single-precision sinf() – sin() itself being double-precision), or indeed on your calculator, you can more or less guarantee it’s doing something along these lines: first, range reduction to map all argument values into the range 0 to 90 degrees (or -90 to 90); then, evaluation of a 9th-degree polynomial that looks like this:

c1*x + c3*x^3 + c5*x^5 + c7*x^7 + c9*x^9
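
Evaluating that costs only a handful of multiply-adds if you work in Horner form on x^2. Here’s a minimal sketch in C – the range reduction that gets x into -1..+1 is assumed to have happened already, and sin_poly is just a name for illustration:

/* Evaluate c1*x + c3*x^3 + c5*x^5 + c7*x^7 + c9*x^9 in Horner form
   on x^2: four multiply-adds plus one final multiply by x. Expects
   x already range-reduced so that [-1, 1] means -90..+90 degrees. */
static float sin_poly(float x, const float c[5])  /* c = {c1,c3,c5,c7,c9} */
{
    float x2 = x * x;
    return x * (c[0] + x2 * (c[1] + x2 * (c[2] + x2 * (c[3] + x2 * c[4]))));
}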

The coefficients c1-c9 will have been specially chosen to ensure that the polynomial gives the best result over the given range – where ‘best’ means minimizing the worst-case error, either absolute or relative. Incidentally, the reason for choosing degree 9 is that it’s accurate enough to give the nearest floating-point number to the true answer. Degree 7 isn’t enough, and degree 11 or more would be wasted effort for a single-precision answer. But the question is, where do you get the coefficients from? I’ve always had to look them up in a book. But where did the book get them from? And what would you do if the function you wanted wasn’t in the book? Or if you wanted a different degree of polynomial, perhaps for a coarser but faster approximation?

The answer seems to be the Remez Exchange Algorithm, which till now I’ve seen as a bit of a black art, and I now realize why – when I tried to track down information that would allow me to implement it, everything seemed to be either much too advanced for me to understand, or too vague and simplistic to permit writing any code. Just about the only middle-ground resource I found was the Wikipedia entry for a broader topic – approximation theory (http://en.wikipedia.org/wiki/Approximation_theory).
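
Since none of those resources spelt the loop out, here’s a bare-bones sketch of the basic idea – the absolute-error case for a plain dense polynomial, fitting exp() at degree 4 purely as an illustration. The relative-error weighting and the odd-powers-only trick needed for the sin example are left out, it works on a fixed grid, and there’s no convergence test or failure handling – it just runs a fixed number of passes and prints E each time so you can watch it settle. This isn’t the exact code that produced the numbers below:

#include <math.h>
#include <stdio.h>

#define N    4            /* polynomial degree (illustration only)      */
#define PTS  (N + 2)      /* a minimax fit equioscillates at N+2 points */
#define GRID 2000         /* density of the error-curve scan            */

static double f(double x) { return exp(x); }  /* function to approximate */

static double poly(const double *c, double x)
{
    double r = c[N];
    for (int j = N - 1; j >= 0; --j) r = r * x + c[j];
    return r;
}

/* Gaussian elimination with partial pivoting on an n x (n+1) system;
   the solution ends up in column n. */
static void solve(int n, double m[PTS][PTS + 1])
{
    for (int k = 0; k < n; ++k) {
        int p = k;
        for (int i = k + 1; i < n; ++i)
            if (fabs(m[i][k]) > fabs(m[p][k])) p = i;
        for (int j = 0; j <= n; ++j) {
            double t = m[k][j]; m[k][j] = m[p][j]; m[p][j] = t;
        }
        for (int i = k + 1; i < n; ++i) {
            double s = m[i][k] / m[k][k];
            for (int j = k; j <= n; ++j) m[i][j] -= s * m[k][j];
        }
    }
    for (int i = n - 1; i >= 0; --i) {
        for (int j = i + 1; j < n; ++j) m[i][n] -= m[i][j] * m[j][n];
        m[i][n] /= m[i][i];
    }
}

int main(void)
{
    const double A = -1.0, B = 1.0, PI = 3.14159265358979323846;
    double x[PTS], c[N + 1], E = 0.0;

    /* Start the reference points at the Chebyshev extrema on [A, B]. */
    for (int i = 0; i < PTS; ++i)
        x[i] = 0.5 * (A + B) - 0.5 * (B - A) * cos(PI * i / (PTS - 1));

    for (int iter = 0; iter < 16; ++iter) {
        /* Solve for c[] and E so the error alternates exactly at x[i]:
           c0 + c1*x_i + ... + cN*x_i^N + (-1)^i * E = f(x_i)           */
        double m[PTS][PTS + 1];
        for (int i = 0; i < PTS; ++i) {
            double p = 1.0;
            for (int j = 0; j <= N; ++j) { m[i][j] = p; p *= x[i]; }
            m[i][N + 1] = (i & 1) ? -1.0 : 1.0;
            m[i][PTS]   = f(x[i]);
        }
        solve(PTS, m);
        for (int j = 0; j <= N; ++j) c[j] = m[j][PTS];
        E = m[N + 1][PTS];

        /* Exchange: cut [A, B] at the sign changes of the error curve
           and move each reference point to the biggest |error| in its
           piece, ready for the next pass.                              */
        double e0 = poly(c, A) - f(A);
        double best = fabs(e0), bestx = A;
        int seg = 0;
        for (int g = 1; g <= GRID; ++g) {
            double xg = A + (B - A) * g / GRID;
            double e  = poly(c, xg) - f(xg);
            if ((e > 0.0) != (e0 > 0.0) && seg < PTS - 1) {
                x[seg++] = bestx;
                best = 0.0;
            }
            if (fabs(e) > best) { best = fabs(e); bestx = xg; }
            e0 = e;
        }
        x[seg] = bestx;
        printf("iter %2d: E = %.12g\n", iter, E);
    }
    for (int j = 0; j <= N; ++j) printf("c%d = %.15g\n", j, c[j]);
    return 0;
}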

In the case of the above example, here’s what my code produced for the function sin(x*(pi/2)) over the interval -1.0 <= x <= +1.0 (corresponding to -90 to +90 degrees):

c1 = 1.5707963184477
c3 = -0.645963710599868
c5 = 0.0796896789479803
c7 = -0.0046737666126792
c9 = 0.000151485130863276
max relative error = 5.31399e-009
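
If you want to sanity-check the quoted 5.31399e-009 yourself, a brute-force scan does the job. A quick sketch – it trusts the double-precision library sin() as ground truth, and samples a grid rather than testing every float:

#include <math.h>
#include <stdio.h>

int main(void)
{
    const double c1 =  1.5707963184477,
                 c3 = -0.645963710599868,
                 c5 =  0.0796896789479803,
                 c7 = -0.0046737666126792,
                 c9 =  0.000151485130863276;
    const double half_pi = 1.5707963267948966;
    double worst = 0.0;
    for (int i = -100000; i <= 100000; ++i) {
        double x  = i / 100000.0;
        double x2 = x * x;
        double p  = x * (c1 + x2 * (c3 + x2 * (c5 + x2 * (c7 + x2 * c9))));
        double s  = sin(x * half_pi);
        if (s != 0.0 && fabs((p - s) / s) > worst)
            worst = fabs((p - s) / s);
    }
    printf("max relative error = %g\n", worst);   /* expect ~5.3e-009 */
    return 0;
}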


And for comparison, here are the values from Cecil Hastings’ classic “Approximations for Digital Computers” (first printed over half a century ago!):

c1 = 1.57079631847
c3 = -0.64596371106
c5 = 0.07968967928
c7 = -0.00467376557
c9 = 0.00015148419

They agree pretty closely. I don’t yet know the main source of the differences; the Wikipedia entry does point out that you need much higher precision when computing the coefficients than the target precision, so it could just be that. (And now that I think about it, I don’t actually know for sure that the figures in the book are closer to the correct answer than mine are.) However, it doesn’t really matter, because the best way to judge the goodness of the results is to look at the error curve. The attached “my sin.jpg” shows the sin function in green, and the error curve – enormously magnified – in red. The fact that the tops and bottoms of the error curve all line up horizontally shows that this is the desired minimax polynomial, or as near as dammit. Any other polynomial would have an error curve with a bigger peak somewhere.

Now compare that red error curve to the error curve shown in the Hastings book, which I scanned as “textbook sin.jpg”. Again, pretty close!

Incidentally, many of us faced with the problem of computing a function like sine would be tempted to use a Taylor series expansion about zero – I know I would have, before I knew about minimax polynomials – and this would be a disastrously bad thing to do. For comparison between minimax and Taylor, see the attached “taylor_sin.jpg”, where the yellow curve shows the relative error for the Taylor series expansion about 0, to the same number of terms (i.e. degree 9). You can see that towards the right-hand side of the interval, it zooms up to some massive value compared to the red minimax error curve. It’s actually 3 orders of magnitude bigger at its highest point. So, don’t ever use a Taylor series again.
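
You can get a feel for that factor without plotting anything: the degree-9 Taylor coefficients for sin(x*(pi/2)) are just powers of pi/2 over odd factorials with alternating signs, and Taylor’s worst point is the right-hand end, x = 1, where the true answer is exactly 1. A quick comparison (the minimax coefficients are the ones from earlier):

#include <math.h>
#include <stdio.h>

int main(void)
{
    const double h  = 1.5707963267948966;   /* pi/2 */
    /* Degree-9 Taylor series about 0: h*x - (h*x)^3/3! + ... + (h*x)^9/9! */
    const double t1 =  h,
                 t3 = -pow(h, 3) / 6.0,
                 t5 =  pow(h, 5) / 120.0,
                 t7 = -pow(h, 7) / 5040.0,
                 t9 =  pow(h, 9) / 362880.0;
    const double x = 1.0, x2 = x * x;       /* 90 degrees: sin = 1 exactly */
    double taylor  = x * (t1 + x2 * (t3 + x2 * (t5 + x2 * (t7 + x2 * t9))));
    double minimax = x * (1.5707963184477 + x2 * (-0.645963710599868
                   + x2 * (0.0796896789479803 + x2 * (-0.0046737666126792
                   + x2 * 0.000151485130863276))));
    printf("taylor  rel error at x=1: %g\n", fabs(taylor  - 1.0));
    printf("minimax rel error at x=1: %g\n", fabs(minimax - 1.0));
    return 0;
}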

What can we use Remez for? Well, among other things, we can overhaul and expand our SPU and PPU maths libraries. It will help us find implementations for the functions we don’t have, for which we must currently fall back on slow library routines. I think it can also improve some of the implementations we do have, where we may have acquired code from somewhere without really understanding how it works or how it could be improved. But it can also be used when you want the fastest speed and you’re not so concerned about accuracy – say you wanted an exp() function that only computed up to degree 4 or something. Indeed, minimax polynomials can be combined with floating-point abuse for doing range reduction in certain functions (e.g. log2, log10, ln, exp, sqrt, 1/sqrt, pow) to get some really fast implementations, which I definitely plan on using.
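
To give a flavour of what I mean by floating-point abuse, here’s the idea for log2: pull the exponent straight out of the IEEE-754 bit pattern so the polynomial only has to cover the mantissa range [1, 2). A rough sketch – it assumes a positive, finite, normal input, and the quadratic is just a placeholder interpolation, not a proper minimax fit (the real coefficients would come out of the Remez code):

#include <stdint.h>
#include <string.h>
#include <stdio.h>
#include <math.h>

static float log2_sketch(float v)
{
    uint32_t bits;
    memcpy(&bits, &v, sizeof bits);               /* reinterpret the bits  */
    int e = (int)(bits >> 23) - 127;              /* unbiased exponent     */
    bits = (bits & 0x007FFFFFu) | 0x3F800000u;    /* force exponent to 0   */
    float m;
    memcpy(&m, &bits, sizeof m);                  /* mantissa m in [1, 2)  */
    /* Placeholder quadratic for log2(m) on [1, 2) – interpolation through
       m = 1, 1.5, 2, so it’s exact at the octave boundaries; swap in
       proper Remez coefficients for real use. */
    float p = -1.679702f + m * (2.019553f + m * -0.339851f);
    return (float)e + p;
}

int main(void)
{
    printf("log2_sketch(10.0f) = %f, library log2 = %f\n",
           log2_sketch(10.0f), log2(10.0));
    return 0;
}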

Mike