by Rick Ratchford
Although my career started off in various facets of computer science (technical work, programming, instruction), in the late 80's I became fascinated with the idea of trading in the Futures and Commodity markets. So in 1989, I responded to a mail-order course to learn about this highly leveraged, high-risk/high-reward endeavor and jumped right in.
What really fascinated me about trading was the challenge of market timing. Putting all other aspects of trading aside in this article, the market timing component attracted me more than anything else, and I knew it would not be an easy one to master. There are simply too many variables to deal with, and it became obvious to me that striving for precision timing would be a lifelong work. One can only hope to come very close to exact, and hopefully most of the time. 100% precision 100% of the time is just wishful thinking.
Since 1989 I have tried many theories and applied many indicators in various combinations and settings. Like many, I've taken suggestions from trading magazines and books only to find very few items worth holding on to. And even the little I held onto from these public sources never produced results I would consider 'precise'. However, all this was not for naught. In time I came to discover market cycles, and later to discover for myself an approach to cycle extraction that has proven time and time again to be very precise. However, I must qualify my reference to 'precise' with a disclaimer. It is precise a high percentage of the time, as opposed to 100% of the time. And this precision is based on a maximum margin of error of +/- one price bar for the time frame being forecasted.
Naturally, when such precision is claimed possible, there will follow the skeptics and critics. And anyone who knows who I am also knows that I have quite a following of such skeptics. Normally this is born of a lack of understanding on the part of the skeptic, further inflamed when this fact is pointed out, or simply because suggesting that such precision is possible can be construed as 'arrogant'. Perhaps. But if it can be done, and I say it can be and is, then I must live with the consequences of making such a statement and its arrogant undertones. Soft-pedaling the subject to lessen the accuracy of its statement, to avoid the sensitivities of some who wish to deny the possibilities, only veils truth in favor of ignorance. No advancements are possible until one can learn to accept the possibilities.
As stated earlier in this article, there are many variables that one must take into account when it comes to precision. Taking all of the variables into account may someday be possible, but not at this time. However, taking as many variables into account as we can is within reach now, so it becomes important to formulate rules or techniques that take advantage of what we can do and help protect against losses from what we cannot. And so, along with forecasting market turns based on my cycle extraction model, I have a set of guidelines (suggestions) on how to get the most from the output of this model. No, it is not perfect, but it does form a very good foundation to build on.
One of the biggest criticisms I hear from some is that of allowing a +/- one bar deviation, or margin of error, in my results. In other words, if my computer model on cycles forecasts a market top or bottom for July 27 and it actually occurs with the July 28th price bar, I consider this to be an accurate forecast. Some critics will disagree. What has always been the case, however, is that these critics never offer a more precise approach in their arguments. Therefore, as far as what is available today, a one bar margin of error is pretty darn precise.
COMPUTERS and PRECISION
As if there were not enough variables to deal with within the historical market data itself, we must also do all these calculations using computers. Today's computers are extremely powerful and can do a lot of calculations in a very short period of time. When I started with computers back in 1973 at McDonnell Aircraft Co. in Long Beach, CA (as a student of computer programming), they were very large machines in comparison to today's, and only a few large corporations could afford their hefty price tag. Today, many people have machines sitting on a table somewhere in their home that are many times more powerful than those I started out on in the 70's. But no matter how powerful they are today, one thing remains as much a problem now as it was then, and that is floating-point mathematics.
Without getting too deep into the inner workings of computers, dealing with floating-point values has always been somewhat of an issue. Modern computers now have what is called a floating-point processor, which decades ago did not exist. The computer is made to do some pretty fancy tricks in order to turn ordinary bits (0's and 1's) such as 11010111001 into a fractional representation. And it does a pretty darn good job of it.
However, as a programmer I learned long ago that if you have two separate calculations that result in floating-point values that should be equal, you cannot do a direct comparison and expect the computer to report that the two are equal without providing some margin for error.
Comparing floating-point numbers is very dangerous. Given the inaccuracies present in any computation, you should never compare two floating-point values to see if they are equal. In a binary floating-point format, different computations that produce the same (mathematical) result may differ in their least significant bits. For example, adding 1.31e0 + 1.69e0 should produce 3.00e0. Likewise, adding 1.50e0 + 1.50e0 should produce 3.00e0. However, were you to compare (1.31e0 + 1.69e0) against (1.50e0 + 1.50e0), you might find that these sums are NOT equal to one another. The test for equality succeeds if and only if all bits (or digits) in the two operands are the same. Because it is not necessarily true that two seemingly equivalent floating-point computations will produce exactly equal results, a straight comparison for equality may fail when, algebraically, such a comparison should succeed.
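A quick Python sketch shows this behavior. The specific values differ from those above (the classic 0.1 + 0.2 case is used because its result is well known), but the principle is identical:

```python
# Two computations that are algebraically equal can differ in their
# least significant bits when done in binary floating point.
a = 0.1 + 0.2    # algebraically this is 0.3
b = 0.3

print(a == b)    # False: the two values differ in their last bits
print(repr(a))   # 0.30000000000000004
print(repr(b))   # 0.3
```

Neither 0.1 nor 0.2 can be represented exactly in binary, so their sum carries a tiny error that a bit-for-bit equality test exposes.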
The standard way to test for equality between floating-point numbers is to determine how much error (or tolerance) you will allow in a comparison, and then check to see if one value is within this error range of the other. The straightforward way to do this is to use a test like the following:
If ((Value1 >= (Value2 - error)) and (Value1 <= (Value2 + error))) then …
A more efficient way to handle this is to use a statement of the form:
If (abs(Value1 - Value2) <= error) then …
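Both forms translate directly into Python, and the standard library's math.isclose applies the same idea. A minimal sketch follows; the tolerance of 1e-9 is an arbitrary choice for illustration:

```python
import math

def nearly_equal(value1: float, value2: float, error: float = 1e-9) -> bool:
    """Tolerance-based equality: true when the two values differ
    by no more than the allowed error."""
    return abs(value1 - value2) <= error

a = 0.1 + 0.2
b = 0.3
print(a == b)              # False: direct comparison fails
print(nearly_equal(a, b))  # True: within tolerance
print(math.isclose(a, b))  # True: standard-library equivalent
```

Choosing the tolerance is the real design decision: it should be small enough to distinguish genuinely different values, yet large enough to absorb accumulated representation error.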
Checking two floating-point numbers for equality is a very famous problem, and almost every introductory programming text discusses this issue.
Now imagine a program that is designed to load years of historical price data where each price is a floating-point value. Various calculations are performed on this data, with the results further added, subtracted, multiplied and divided by other fractional results. When you consider all that takes place inside the computer, ending up with results that are only +/- one price bar off is relatively precise! And when you consider that these floating-point values within the computer must be given some margin for error if you hope to properly compare the results, then logically the final results should be expected to be within a certain tolerance, or margin of error, as well!
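How error accumulates over a chain of operations can be seen in a short Python sketch: repeatedly adding a value that has no exact binary representation, such as 0.1, drifts away from the algebraically exact sum.

```python
# Summing 0.1 ten times should give exactly 1.0, but each addition
# carries a tiny representation error that accumulates.
total = 0.0
for _ in range(10):
    total += 0.1

print(total == 1.0)              # False
print(repr(total))               # 0.9999999999999999
print(abs(total - 1.0) <= 1e-9)  # True: within a reasonable tolerance
```

Ten additions already break exact equality; a model grinding through years of price data performs vastly more operations, so some tolerance in the final result is unavoidable.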
Now consider the +/- one price bar margin for error I mentioned earlier. When you also consider that many markets only trade a few hours each day as opposed to 24 hours, being a price bar off is not always to be considered being 24 hours off. As a matter of fact, the resulting error may be just a couple of hours into the next trading session, which shows up on the daily price chart as the next price bar. So in reality, the one bar error is often just an exaggeration brought on by standard price representation (price charts).
So logically then, precision in timing the markets is relative. No hocus-pocus or voodoo necessary. Mathematics and the use of powerful computers help us better understand what goes on within market price action, and they also help in timing the markets with greater precision than ever before. But this comes at a price due to its inherent weakness. We must learn how to minimize any negative effects resulting from a potential deviation in the results while learning how to get the most out of its precision.
