Pointing Tests: March, 1998

At this writing, none of the data obtained during the pointing tests have been analyzed. I will fill out this report with the results of the analyses after they are completed.

Following is a description of the tests we did and preliminary conclusions where appropriate.

Co-conspirators: Charlie Kaminski and Doug Toomey.

1. Sky map run with MIM off the telescope

We did a complete sky map run, measuring the pointing error at 121 stars. The pointing we experienced while doing the sky map was surprisingly good, though we don't have quantitative numbers for this particular comparison. It seemed as if every star was within 10 arcseconds or so on the sky map slews. Each slew was made with the assistance of the previous sky map's pointing data (12/97), and since we were repeating the same pattern that was used to derive the previous error surface, it would seem that the current error surface is close to the previous one. This has two ramifications: first, the MIM does not appreciably distort the error surface, and second, whatever is disturbing the pointing is pattern-related, since pointing is demonstrably worse for non-sky-map patterns such as the benchmark pattern.

The RMS fit of the data set to the sky map was 2.1 arcsec in HA, 3.6 in declination, about the same as the December run with MIM on (2.5 and 3.6).
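RMS figures like those above can be reproduced from the raw residuals with a few lines. The sketch below uses invented residuals, since the actual data set isn't reproduced in this report.

```python
import math

# Hypothetical pointing residuals (arcsec) from a sky map fit:
# one (HA error, Dec error) pair per star. These are invented values.
residuals = [(1.8, -3.2), (-2.5, 4.1), (0.9, -2.8), (-1.6, 3.9)]

def rms(values):
    """Root-mean-square of a list of residuals."""
    return math.sqrt(sum(v * v for v in values) / len(values))

rms_ha = rms([ha for ha, _ in residuals])
rms_dec = rms([dec for _, dec in residuals])
print(f"RMS fit: {rms_ha:.1f} arcsec in HA, {rms_dec:.1f} in declination")
```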

Pointing benchmark after the sky map

The benchmark pattern is 19 stars in a raster pattern, with stars separated by 30 degrees nominal.

It was obvious during the post-sky-map benchmark that pointing was as bad as we have experienced in the recent past. Pointing is bimodal: of the 16 slew targets in the 19-point benchmark pattern (3 were not visible), 12 had slew errors of less than 10 arcsec on both axes, many less than 5 arcsec. Three other slews were terrible: 26 and 10, 12 and 23, and 9 and 28 arcsec. One slew was intermediate at 11 and 5 arcsec, not too bad. Pointing on a particular slew is usually either quite good or terrible, with no gradation or apparent bell curve. When a slew is bad, it is bad on both axes.

Conclusions

1. Whatever is causing the bad pointing is operating on both axes simultaneously. This rules out the drive electronics, unless they are defective in the same way on both axes, which is highly unlikely (but see "Encoder slippage" below). So what could couple into both axes? An unstable primary mirror support or restraint; a loose secondary mirror, or spider vanes relaxing in such a way as to tilt the secondary on both axes; a shift of the on-axis camera or the detector within it; others?

2. The MIM and its counterweights are not the direct cause of pointing deterioration. The pointing seems to be about the same whether we use an error surface derived from the MIM-equipped telescope or the error surface with the MIM off. Doug Neill suggested that there could conceivably be telescope bearing damage due to the MIM, for which the telescope and its mechanical drive weren't designed. If so, we would not expect to see improvement when the MIM was removed.

3. The fact that the error surface is fit moderately well (2 to 3 arcsec RMS) with few or no bad outliers, yet there are bad outliers on the benchmark and during regular operation, suggests that the pointing is pattern-dependent. If you repeat a pattern of slews, the pointing is quite reproducible, but if you slew to the same objects using a different pattern, some bad slews will result. The pointing algorithm (in fact, any pointing algorithm) depends on the same slew error occurring at a particular destination in the sky no matter which direction you slew from. If the slew error changes just because you approached a star from a different direction, there is no practical way of predicting the slew error. It looks like this is what's happening.

Suggestions

1. Inspect the telescope drive bearings and gears insofar as possible without disassembly to see if there is apparent wear such as flats, scoring, etc. Careful with your clothes; the gears are greasy.

2. Inspect the spider and chopping secondary mirror mount for looseness. Pointing is also bad with the tip-tilt top end, so I doubt that the secondary is the main cause, unless the same problem is occurring on both top ends, highly unlikely. Still, we should inspect the top end.

3. Inspect the on-axis camera, its detector and its mounting. What about the rotator? Is it clamped down tight, or is it shifting in angle with backlash? Is it rocking?

2. Slew repeatability and mirror movement tests

Doug Toomey executed a computer program that monitors the 5 mirror position sensors. We slewed back and forth many times between two stars 15 deg. apart on both axes, near the position in the sky where one of the bad benchmark slews occurred, looking for a wild slew and a corresponding mirror position shift. No wild slews occurred, nor were there any untoward mirror shifts. We then chose two stars 30 deg. apart corresponding more closely to the bad benchmark slew; no mirror shift or wild slew occurred over many round trips. The slewing was quite accurate, time after time.

What we did notice was a progressive creep in the slew setting correction on both axes, which indicates encoder slippage. The incremental encoders, on which we base the coordinate position reference, are friction driven and will always experience a certain amount of slippage on the drive wheel machined surfaces that they engage (microcreep, as the mechanical engineers call it). We have always seen this slippage, even in the early days. It didn't seem to bother us then, since pointing was excellent and quite reproducible. However, perhaps the magnitude of slippage has changed. Theoretically, encoder slippage could cause the pointing problems we experience. It would distort the analytical error surface smoothly, so the surface could be fit well, but slews using that error surface might be poor due to the encoder slippage for a particular slew not corresponding to the slippage that had occurred during the pointing sky map. In short, the pointing would be pattern-dependent, which seems to be what we have here.
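As a toy illustration of why cumulative slippage makes pointing pattern-dependent (the slippage rate and slew distances below are assumptions, not measurements):

```python
# Toy model of incremental-encoder microcreep: each slew loses a small,
# distance-proportional amount of travel to slippage on the friction
# drive, and the losses accumulate in the coordinate reference.
SLIP_PER_DEG = 0.05  # assumed slippage rate, arcsec per degree of slew

def accumulated_slip(slew_distances_deg):
    """Running total of reference-frame creep (arcsec) after each slew."""
    total = 0.0
    creep = []
    for d in slew_distances_deg:
        total += SLIP_PER_DEG * abs(d)
        creep.append(total)
    return creep

# Two patterns visiting similar sky positions via different slews
# accumulate different creep, so an error surface fit to one pattern
# mispredicts the other.
print(accumulated_slip([15, 15, 30]))  # → [0.75, 1.5, 3.0]
```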

There is more on encoder slippage in this report below.

Conclusions

1. Pointing seems to be quite accurately reproducible on back and forth slews between two stars. At no time did we notice a large deviation from the usual slew setting error as we repeated the two-star pattern. We did notice a consistent creep in one direction of the position reference on both axes as the pattern was repeated, which I attribute to incremental encoder slippage.

2. There was no discernible out-of-range primary mirror motion within its cell. Mirror stabilization and support seem to be quite good.

3. Incremental encoder slippage test runs

For some time now I have been working (off and on) with incremental encoder slippage measurements. I would like to develop a formula for slippage relating the amount of slippage to the direction, displacement, and axis of slew motion. If I can develop such a formula, I can encode it in the slew program and compensate for encoder slippage, at least to the first order. If this is the primary cause of the poor pointing (I doubt it), we would then have cured the problem. More likely, the encoder slippage is a contributing factor. Anyway, it would be a good idea to reduce the effect of encoder slippage as much as we can. With this in mind, we ran the following tests.

I made up a slew pattern for each axis consisting of star positions spaced 10 or 15 degrees apart along one axis only, over the maximum range for that axis. The range of star positions is repeated four times for the complete pattern. Absolute position encoder (APE) readings were acquired along with slew setting errors for each star. The four repetitions of the axis range, plus the absolute encoder readings, should give me the data to derive incremental encoder slippage values throughout the range of motion for that axis. I will be doing an Excel analysis of this data set.
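The planned reduction can be sketched as follows; the 15-degree station spacing comes from the test, but the encoder readings are invented for illustration.

```python
# For each slew segment, the incremental-encoder displacement minus the
# absolute-encoder (APE) displacement is the slippage accrued on that
# segment. All positions below are invented, in arcsec.
def segment_slippage(incr_positions, ape_positions):
    """Per-segment slippage (arcsec): incremental minus APE displacement."""
    slips = []
    for i in range(1, len(incr_positions)):
        d_incr = incr_positions[i] - incr_positions[i - 1]
        d_ape = ape_positions[i] - ape_positions[i - 1]
        slips.append(d_incr - d_ape)
    return slips

# One pass along one axis, stations 15 deg (54000 arcsec) apart.
incr = [0.0, 54000.0, 108000.0, 162000.0]   # incremental encoder positions
ape = [0.0, 53996.0, 107992.0, 161988.0]    # invented APE readings
print(segment_slippage(incr, ape))  # → [4.0, 4.0, 4.0]
```

With four repetitions of the pattern, the per-segment values could be averaged to beat down the 2-3 arcsec APE repeatability noise.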

The absolute position encoders don't have the resolution of the incremental encoders. They are repeatable only within about 2 or 3 arcsec on each axis, best case. The readings from the absolute encoders will be useful however from a statistical standpoint, and may be relevant even on an individual basis considering the range of slippage over four repetitions at each data point.

Suggestion

Charlie suggests doing a sky map using the fixed absolute encoders instead of the incremental encoders that have this slippage. This is a good idea, and is easily doable. The RMS fit will probably be considerably worse, due to the low resolution of the APEs, but there will certainly be no slippage component and if poor slews occur during the benchmark test, this will prove that the main cause of poor pointing is not the incremental encoders. Actually, if I changed the slew program to use the APEs rather than the incrementals, the pointing may well be superior to the current situation, assuming most of the problem is due to the incremental encoders. It's worth a try at the next opportunity.

4. Sky map, initializing the coordinate system at every slew

The usual way of doing a sky map is to initialize the coordinate system on a star near the zenith by pressing Pushbutton 5 on the console. This action sets the system coordinates to the catalog position of the star, reduced to the refracted apparent position. From that point, the slew error for each star over the whole sky map is with reference to that first initialization.

Because of the encoder slippage that the other tests showed, we decided to do a full sky map re-initializing the coordinate reference frame at each star in the sky map by having the computer execute the Pushbutton 5 code just before slewing to the next star. This essentially differentiates the error surface, since each slew error will consist only of the error difference between the starting and end points of the slew, rather than the slew error referred to the starting point of the sky map. But on the other hand, encoder slippage will not accumulate throughout the sky map as before.

The data reduction program that computes the correction coefficients is not programmed for this kind of error surface. Anyway, we went ahead and reduced the data set and did a benchmark. As expected, the results were very poor. I intend to produce a proper data set by adding slew errors into an accumulator from point to point along the sky map. This will transform the sky map into a properly integrated data set, with a minimal slippage component, which will be operated on correctly by the data reduction program.
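The planned accumulation step might look like this sketch, with invented error increments:

```python
# The re-initialized sky map yields only the error *increment* on each
# slew; summing the increments from the starting star reconstructs an
# integrated error surface the data reduction program can fit as usual.
def integrate_errors(incremental_errors):
    """Turn per-slew (dHA, dDec) error increments into cumulative errors."""
    ha = dec = 0.0
    integrated = []
    for d_ha, d_dec in incremental_errors:
        ha += d_ha
        dec += d_dec
        integrated.append((ha, dec))
    return integrated

# Three slews' worth of hypothetical error increments, in arcsec.
print(integrate_errors([(2.0, -1.0), (-0.5, 3.0), (1.5, 0.0)]))
# → [(2.0, -1.0), (1.5, 2.0), (3.0, 2.0)]
```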

Once I have a set of correction coefficients, I will put them on a diskette and send it over to the IRTF to try out. I would hope that an observer would be willing to give up 20 minutes or so to let us do a pointing benchmark. If the results were good, the pointing correction coefficients would be put on line for the benefit of the observer, and we would then have the knowledge that encoder slippage is a big part of our pointing problem.

5. Suggestion: Attach a 12" telescope

Doug Toomey suggested separating the optical and mechanical possible causes of poor pointing by fastening a 12" telescope somewhere on the main telescope mount and running tests. We could do the whole pointing sky map and benchmark, using the 12" telescope as a pointing reference. Perhaps we could mount it in such a way that its image could be projected onto one of our TV focal plane cameras, or maybe a built-in TV camera with a reticle could be purchased as an accessory.

This would immediately separate out the optics from the mount. If identically poor pointing performance occurred using the attached 12" telescope, we wouldn't have to worry about primary mirror support or secondary mirror or spider considerations. I suggest that Doug proceed with this idea. Let's at least line up the equipment and plan the installation. If the analysis of the data acquired during the recent pointing run still leaves us without a solution, installing the 12" telescope on the IRTF will help greatly now and in the future when such questions could easily arise again.

Summary of analysis tasks:

1. Pointing run with MIM off:
The APE positions are acquired during a sky map along with the coordinates derived from the incremental encoders. I would like to compare the sky map APE positions to the APE positions for the benchmark bad slews. If the APEs showed that the slew displacements as controlled by the incremental encoders were pretty much correct, then an optics shift must be considered. If the APE displacements for the bad slews were markedly different from the incremental encoder displacements, then the incrementals are suspect, due to slippage or other causes.

2. Slew repeatability: back and forth star tests
I'll plot or otherwise derive an average coordinate reference slip per round trip for each axis. Let's see how regular the net slippage is.

3. Incremental encoder slippage test runs:
Set up the data set in an Excel spreadsheet. Determine the slippage per unit displacement at the various sky points at which data were taken. This test will categorize the slippage in both directions, so we aren't limited to net slippage as we are in the two-star back and forth tests. Look for slippage obeying some law that we can eventually encode into the slew program. Compare incremental encoder reference positions with APE positions. If we can correct the slippage with an analytical expression, we should be able to derive incremental encoder positions that match up with the APE positions in the data set.

4. Sky map using PB5 initialization on each star:
The principal benefit of this test was to eliminate most of the encoder slippage from consideration.
Prepare an adjusted data set by accumulating slew errors sequentially from point to point on the acquired sky map. Derive a set of pointing coefficients from the adjusted data set and an RMS fit. If it looks good, redo the benchmark slews on paper by plugging in the newly derived pointing errors at each benchmark star. If necessary, use the APE data to try to reproduce what the benchmark slews would have done if the adjusted sky map data set had been used instead. Load the adjusted set of pointing correction coefficients onto a TCS 8" floppy diskette and send it over to be tried at night.

5. Other analysis techniques? Suggestions?

Jim Harwood