Transcendental function programming
Programming and chess have much in common.
Those who delve deep into the mysteries of chess will have discovered that a single move can alter the entire game. The ramifications go deep. Each subsequent move (or "ply" in the jargon) is conditional upon what went before, and the great chess-masters of this world pride themselves on being able to evaluate several ply of consequences.
The elegance of a game of chess is judged by the efficiency with which the player heads towards his goal. No move must be wasted. There must be no hesitation. There must be no turning back to correct an error. In the final stages - the deeper ply - one sees the reasoning behind the earlier, seemingly trivial moves.
Then the game is over. There is a glow of satisfaction, but nothing more. Perhaps, however, there are books written for the guidance of others - that they also may know the feeling of achievement. Yet in the final analysis it is a GAME. Passions may be aroused, devotees may consider it to be a religion, a philosophy, the highest accomplishment of human intellect - and still it is a game.
Machine code programming has all these facets - and delivers a PRODUCT as its goal. It is a game and an industry combined.
The programming of the transcendental functions involves the use of mathematics. There are various polynomials for the creation of sine, cosine, tangent, the logarithm and antilogarithm. Slavish obedience to set rules may well deliver a working product - but is it the BEST?
Can you become a chess master by memorising a book?
Let us consider the Maclaurin-Taylor polynomial for the natural antilogarithm. Do you take 1, then add the argument X? Do you now multiply the X by the X and divide by 2 before adding it on? Do you multiply X by X by X, divide by 2, divide by 3, and also add on? Is this the best?
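The naive term-by-term evaluation that these questions describe might be sketched like this in C. This is a hypothetical illustration, not Wehner's code; the function name and the use of double precision are mine.

```c
/* Naive Maclaurin evaluation of e^x: each term x^n / n! is built
   from scratch - multiply by X, divide by 2, divide by 3, and so on -
   before being added to the running sum. */
double exp_naive(double x, int terms)
{
    double sum = 1.0;                 /* take 1 ...                  */
    for (int n = 1; n < terms; n++) {
        double term = 1.0;
        for (int k = 1; k <= n; k++)
            term *= x / k;            /* ... multiply by X, divide   */
        sum += term;                  /* ... and add it on           */
    }
    return sum;
}
```

Note that building each term afresh costs roughly n operations for the n-th term - which is exactly why "is this the best?" is the right question.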
Horner's rule says you can PRE-divide a large number such as 1 followed by thirty-two zeros in binary (1 00000000 in hex, 4294967296 in decimal - that is, two to the power thirty-two) by 2, by 3 and so on, and build the results into your code. Then the machine does not have to waste time creating constants. The rule also says that you start at the end of the polynomial and work backwards. So if you multiply the last coefficient by X, and add on the penultimate before multiplying again, the last will have X-squared in it whilst the penultimate will have X. So it costs only one addition and one multiplication per term.
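Horner's scheme might be sketched as follows - again a hypothetical illustration in double precision. On the integer processors the text has in mind, the pre-divided coefficients would be fixed-point words such as 2^32/2, 2^32/6, and so on, baked into the code.

```c
/* Pre-divided coefficients of e^x, last first: 1/7!, 1/6!, ..., 1/1!, 1/0!.
   The divisions happen once, at compile time - never in the machine. */
static const double C[] = {
    1.0 / 5040, 1.0 / 720, 1.0 / 120, 1.0 / 24,
    1.0 / 6,    1.0 / 2,   1.0,       1.0,
};

double exp_horner(double x)
{
    double acc = C[0];               /* start at the END of the polynomial */
    for (int i = 1; i < 8; i++)
        acc = acc * x + C[i];        /* one multiply, one add, per term    */
    return acc;
}
```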
But is that the BEST? That depends upon your processor.
Every time you add something to a logarithm it is like multiplying the antilogarithm. Some processors will multiply quickly, others not. So the trick of SHIFTING a number is a good substitute for multiplication. Copy the number. Shift it right. Add the number you first thought of. You have multiplied by ONE-AND-A-HALF without multiplying.
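The shift trick is a one-liner. A minimal sketch in C, on a 32-bit unsigned word (the function name is mine):

```c
#include <stdint.h>

/* Multiply by one-and-a-half without multiplying: copy the number,
   shift the copy right by one (halving it), and add back the
   number you first thought of. */
uint32_t times_three_halves(uint32_t x)
{
    return x + (x >> 1);   /* x * 1.5, truncated towards zero */
}
```

Shifting by two and adding gives one-and-a-quarter, by three gives one-and-an-eighth, and so on - the multipliers the next paragraph needs.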
So you can create a table of the logarithms of one-and-a-half, one-and-a-quarter, one-and-an-eighth and so on. From these you can build an antilogarithm algorithm that differs from Maclaurin-Taylor.
And what happens when the number of shifts exceeds half the number of bits in the mathematics? Then the square of the shifted quantity falls below the working precision, so the logarithm of one-plus-the-shift equals the shift itself - and the whole of the remaining argument can be applied in a single step. Eureka! An unexpected move! Checkmate.
The program has terminated in only a quarter of the expected time!
Books have been written about chess. Rather fewer have been written about transcendental machine-code mathematics. These things are commercial. They are closely-guarded trade secrets.
I had spent some years developing the most exquisite tricks, and was prepared to write the definitive manual on the subject. However, the British government stole everything from me - leaving me only with the memory of many an elegant "game".
What should I do? Should I continue to hide my knowledge? No. I hastily put down in writing as many salient points as I could remember. Perforce, these are just summaries of the techniques involved. There was no time to dwell upon detail. However, those who need to know the reasoning behind the inner core of floating point, and have time to delve and ponder, may well find this page to be a treasure-trove of programming ideas.
The algorithms can be seen at http://www.wehner.org/fpoint/
Charles Douglas Wehner
Born in 1944, Charles Wehner was involved in the design of computers in 1962 before becoming a design engineer and technical author in photoelectrics, nucleonics and radar.
This article courtesy of http://giantchess.biz. You may freely reprint this article on your website or in your newsletter provided this courtesy notice and the author name and URL remain intact.