This article is an open invitation to all gear designers and others who specify gear manufacturing tolerances, and those who put nomenclature on drawings to work toward the use of a common standard for gear geometry tolerances and nomenclature.
To ask for a “common standard” sounds redundant. After all, a standard by definition is common. However, like many things in the gear world, things aren’t so simple. When I’m teaching gear beginners about the industry, I often must respond to questions of why the gear industry does things its way by saying “That is just how it is.”
In the prismatic manufacturing world, GD&T (Geometric Dimensioning and Tolerancing) drawing nomenclature and tolerance standards are very widespread. Attempts have been made in the gear industry to provide standards, but adoption has been spotty and slow. In the void, many different methods have come into use to specify gear quality and describe tolerances.
It is understandable how individual companies came to develop their own in-house requirements. In the past, the standards available did not meet the needs of many companies. Perhaps the application required different analysis, or specific characteristics weren’t readily measured by existing standards, so the company developed its own “standard.”
Over time these in-house standards evolved to be quite specific. In some cases companies with multiple plants developed “dialects” of standards, or separate standards for different plants. Each gear design engineer was able to put different twists on the tolerances and specifications. Engineers who inherit these drawings may not understand just what was intended, but they are reluctant to make a change for fear of making “bad” gears. An engineer responsible for a particular gear design actually told me that the call-outs on the drawing were there because they were on the old drawing. No one currently involved understood what they meant, but they weren’t going to take them off.
If you have spent your gear design career in one company, using your company’s gear standards, then you may not have been exposed to just how crazy it really is out there. In my roles as gear equipment salesman and gear teacher, I see the mad mix of specifications and call-outs on drawings.
Many times it is necessary to simply “know what they mean” rather than be able to interpret the call-out directly. For example, drawings from one large company carry a specification like this: “Total Index Tooth-to-Tooth Spacing Variation.”
What is really meant here? I interpret it as pitch variation, but you could just as easily read it as total accumulated pitch (previously called index) and defend that interpretation. In the past, the word “spacing” had a specific connotation: the comparison of two adjacent pitch measurements. Is this what is intended? Because this specification doesn’t follow a common standard, it causes confusion and error.
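The three readings above really are different measurements, which is why the terminology matters. As a minimal sketch with hypothetical deviation values, here is how single pitch deviation, adjacent-pitch (“spacing”) variation, and total cumulative pitch (index) deviation diverge from one another on the same set of measurements:

```python
# Hypothetical pitch deviations (microns) for an 8-tooth gear:
# measured pitch minus theoretical pitch, tooth by tooth.
# (For a closed gear these must sum to zero, as they do here.)
pitch_dev = [2.0, -1.0, 0.5, 1.5, -2.0, 0.0, 1.0, -2.0]
n = len(pitch_dev)

# Single pitch deviation: the largest individual pitch deviation.
fp = max(abs(d) for d in pitch_dev)

# "Spacing" in the older sense: the difference between two adjacent
# pitches (wrapping around the gear, since it is circular).
fu = max(abs(pitch_dev[i] - pitch_dev[i - 1]) for i in range(n))

# Total cumulative pitch deviation (formerly "index"): the range of
# the running sum of pitch deviations around the gear.
cum, s = [], 0.0
for d in pitch_dev:
    s += d
    cum.append(s)
Fp = max(cum) - min(cum)

print(fp, fu, Fp)  # three different numbers from the same data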
Another common method is to state that the part is to be made to “DIN 3960” specifications, and then not state a quality class (number), but rather to state individual tolerances for each analyzed element. If a published standard is invoked, such as DIN, it is appropriate to use the standard as published. A table-driven standard like DIN is valid if you stay with the standard. When you stray outside the tables, though, what are you really using as a standard? In effect, you are creating a new type of analysis — neither fish nor fowl. This may be fine in a captive manufacturing plant, where the engineering and production are for in-house consumption. When parts are made outside, or if parts are being made for outside firms, then things get confused in a hurry.
How did we get here? From the early days of mechanical analytical inspection machines with strip chart recorders, the most logical methods were a simple construction line analysis and a “K” band.
Without the assistance of a computer to do the analysis, the charts could be “laid off” by the gear guru, and approved or not. The use of plastic templates or overlays was common. The thickness of the pencil line used for drawing the construction line, or the alignment of the template or overlay could mean the difference between acceptance and rejection.
Locally developed methods for analyzing the charts emerged over time. Special terms — understood by those involved and not understood by outsiders — became part of the vocabulary. “Hicks and Nicks,” “Fullness,” “Barrel,” “Protuberances” (not related to hob design), “Peener Tips,” and other terms are still used in specific locations to describe specific gear geometry.
With the advent of computerized analytical inspection equipment, averaging and other analysis methods became available. This reduced human error in the analysis and improved the repeatability and accuracy of the results. Least-squares best-fit line analysis was developed, which helped reduce the apparent variation caused by surface irregularity or scallops.
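The idea behind the best-fit approach can be sketched in a few lines. This is a simplified illustration with made-up trace data, not any vendor’s actual analysis: a line is fitted to the measured trace by least squares, and the form error is evaluated from the residuals, so the overall slope no longer inflates the apparent variation.

```python
import math

# Hypothetical profile trace: deviation (microns) at evenly spaced roll
# positions, combining a small overall slope with surface "scallops".
n = 50
trace = [0.1 * i + 0.5 * math.sin(i) for i in range(n)]

# Least-squares best-fit line through the trace (closed-form solution).
x_mean = (n - 1) / 2
y_mean = sum(trace) / n
slope = (sum((i - x_mean) * (y - y_mean) for i, y in enumerate(trace))
         / sum((i - x_mean) ** 2 for i in range(n)))
intercept = y_mean - slope * x_mean

# Residuals after removing the fitted line: what remains is form error,
# with the slope-induced variation taken out.
residual = [y - (slope * i + intercept) for i, y in enumerate(trace)]
form_error = max(residual) - min(residual)

# The form error is far smaller than the raw range of the trace,
# because the fitted line has absorbed the slope component.
raw_range = max(trace) - min(trace)
```

The same separation underlies the slope/form decomposition in modern standards: the fitted line’s slope corresponds to a pressure-angle (or helix-angle) deviation, while the residual range corresponds to form deviation.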
The computer also allowed creative and imaginative engineers to specify additional types of custom analysis software. On the surface this approach seems to help solve specific analysis problems related to specific product needs or production requirements. More often the custom software adopts customer-specific vocabulary and minor variations in analysis that might be addressed by a more universal approach. Over time one inspection equipment company has developed over 30 different customer-specific, custom-analysis types. While this may be good for the software department, you can imagine what happens when one of these companies has gears made by a company that doesn’t have that exact custom software. Either the supplier has to get the specific software or, more likely, make do with something close and approximate the intent of the designed tolerance method.
The AGMA standards of the past have been widely used by gear makers to specify tolerances for gears. They have often been used to estimate cost based on the processes required rather than for actual manufacturing process control. The main reason is that the old AGMA “K” band — or any band-type analysis, for that matter — allows only a pass or fail determination. No numerical evaluation is produced, so it is of very limited value for process control and manufacturing machine adjustments.
Using only a band analysis is similar to using a go/no-go spline gauge. If the test shows the part to be bad, the next questions should be “How bad is it, what is wrong, and how do I fix it?” Neither the band analysis nor the spline gauge provides any information to answer those questions. The good news is that the new AGMA 2015 standards address these issues, which we’ll discuss in more detail.
Other localized standards have evolved worldwide. There is the previously mentioned DIN, which is a German standard; JIS B 1702-1976 from Japan; and GB 10095-88, which is a Chinese standard. The ISO 1328 standard was developed as an international standard, but it hasn’t had widespread acceptance or use in the United States.
It can be seen from these observations that the current state of tolerance and analysis in the gear industry is controlled chaos. At a given location, product design and production usually understand what they are trying to make, and the tolerances for that. That being said, I have observed many instances of a disconnect between what is on the drawing and what is really being measured and evaluated.
So what is the path forward? I propose that the AGMA 2015 standard be adopted by gear designers and gear makers. Further, it can be the basis of the next version of ISO standards. In most cases, the AGMA 2015 standard analysis methodology can replace custom analyses developed for individual companies. How can this be? The major breakthrough is the concept of “design profile” and “design helix.”
The 2015 analysis uses the techniques found in DIN and ISO analysis for the determination of error, such as total error, angle error, and form error. The crucial requirement is that the gear designer determine and document the exact desired nominal shape. The analysis then yields the amount of error of the measured workpiece compared not to an unmodified theoretical nominal, but to the specified, desired design shape. How perfect is that? All numerical errors relate back to the desired shape. Any modifications, such as crown and chamfers, are part of the nominal desired shape and, if made perfectly, result in an inspection report that shows no error. This can apply to green and hard parts alike. If your inspection tells you what is different from what is specified, what more is needed?
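The design-profile concept can be made concrete with a small sketch. All names and values here are hypothetical, and this is only an illustration of the principle, not the standard’s actual evaluation procedure: deviation is computed against a design profile that includes an intentional crown, so a perfectly made crowned tooth reports zero error.

```python
# Design profile: a parabolic crown, 5 microns deep at the flank ends,
# sampled at 21 hypothetical roll positions.
n = 21
half = (n - 1) / 2
design = [-5.0 * ((i - half) / half) ** 2 for i in range(n)]

# A perfectly made part measures exactly the design shape...
measured_perfect = list(design)
# ...while a real part deviates from it (here, an artificial 0.5-micron
# alternating deviation added for illustration).
measured_real = [d + 0.5 * (-1) ** i for i, d in enumerate(design)]

def total_deviation(measured, design):
    """Range of (measured - design): error relative to the design shape,
    so intentional crown is not counted as 'error'."""
    diff = [m - d for m, d in zip(measured, design)]
    return max(diff) - min(diff)
```

Measured against an unmodified involute, the perfect part would show the full 5-micron crown as apparent “error”; measured against the design profile, it shows none, which is exactly the point.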
The standard also offers a classification number to help determine just how close to perfect the gear is, with a lower number indicating a tighter tolerance. This will aid in quoting and processing gears.
As useful as the specified profile concept is, there will always be a need, or perceived need, for different analyses. One thing most gear engineers can agree on is that you can’t get gear engineers to agree on much of anything. Someone is sure to want to analyze the location of the highest point of the crown, unless the crown is less than 4 microns. In that case, we don’t care about the high point. As useful as that is, I don’t think any standard will codify a specific requirement like that.
We still are a long way from a perfect world, with one unified gear analysis and tolerance system. With the glacial rate of change in the gear industry, our grandchildren may see the widespread implementation of a common standard for gear inspection analysis and tolerances. But that doesn’t change the value of the goal, or the need to get started.