TCS Y2K Frequently Asked Questions

Jim Harwood

May 21, 1999

 

How does Forth deal with dates?

How is today's date determined?

What about the time of day? Sidereal time?

What about file dates?

Leap year?

What about precessing across the Y2K boundary?

Y2K - The Real Story


How does Forth deal with dates?

Forth documentation recommends the Medjool variety from Tunisia over the California Imperial Valley date - oh, sorry, you mean THAT kind of date. The calendar date is carried in Forth as a binary 16-bit integer. The number represents the elapsed day count since Jan. 0, 1950. (Sorry, we programmers like to start with 0, not 1.) Whenever a month-day-year printout or display is required, Forth formats it from the day count on the fly. On January 1 of this year the day count number was 18,262. The date integer can go up to 32,767, so there are about 40 years left before it will roll over. This day number arrangement is a type of modified Julian date. The day count is in Universal time and therefore rolls over at 1400 hours HST.
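
To make the representation concrete, here is a minimal sketch in C rather than Forth (the names are mine, not anything in the real code); it just restates the numbers quoted above and shows where the roughly 40 years of headroom comes from.

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        /* The date: a signed 16-bit day count since Jan. 0, 1950.      */
        int16_t day_count = 18262;           /* value on Jan. 1, 1999   */
        int16_t limit     = INT16_MAX;       /* 32,767: rollover point  */

        printf("Days left before rollover: %d (about %.0f years)\n",
               limit - day_count, (limit - day_count) / 365.25);
        return 0;
    }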

 

How is today's date determined?

The TCS Forth system is self-contained, so it has to be told what the date and time are at the start of each session. Since we can't guarantee that a day or more will never be skipped between startups, simply incrementing the day count every time the clock ticks over 24h can't be the primary way of maintaining the date, though that increment is done to keep the date correct during a session.

The IRTF obtained a satellite time receiver and interfaced it to the Q-bus, so the day number of the current year and time of day in HST are automatically read by Forth when GO is typed, starting a session.

The modified Julian day number for January 0 of the current year is manually edited into the system before the start of each new year, by adding the appropriate number of days (365 or 366) to last year's number. This establishes the current year. At the start of each session Forth is told the day number within the current year, from the satellite receiver or by manual entry if we have to, and uses that to derive the month and day of month.
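
Sketched in C instead of Forth (with made-up names, and the hand-edited constant shown only as a placeholder consistent with the 18,262 figure above), the session-start date setup amounts to one addition and a table lookup:

    #include <stdint.h>

    /* Hand-edited once a year: the day count (since Jan. 0, 1950) for
       Jan. 0 of the current year.  Placeholder: 18,261 + day 1 gives
       the 18,262 quoted above for Jan. 1, 1999.                        */
    #define JAN0_DAY_COUNT  18261

    /* Day of the year on which the "0th" of each month falls, leap-year
       layout; the non-leap-year fixup is covered under "Leap year?".   */
    static const int16_t month0[12] =
        { 0, 31, 60, 91, 121, 152, 182, 213, 244, 274, 305, 335 };

    /* day_of_year comes from the satellite receiver or manual entry.   */
    void init_date(int16_t day_of_year, int16_t *day_count,
                   int *month, int *day_of_month)
    {
        int m = 11;

        *day_count = JAN0_DAY_COUNT + day_of_year;

        while (m > 0 && month0[m] >= day_of_year)   /* locate the month */
            m--;
        *month        = m + 1;                      /* 1 = January      */
        *day_of_month = day_of_year - month0[m];
    }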

The Forth system will keep proper track of the day count, and therefore increment the year, when the UTC clock rolls over at 2 PM HST on Dec. 31, if the system happens to be running that afternoon (it doesn't have to be); but for the night of Jan. 1 the newly edited diskettes with the updated modified Julian date must be used. The rollover day update is just the increment of an integer, and there is no significance to any particular value that integer happens to be carrying.

Note that the satellite receiver reports HST, and Forth knows enough to do the proper conversion to UTC time, date, and year if the session starts after 2 PM HST on Dec. 31.
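
That conversion is nothing more than adding ten hours and carrying into the day count when the sum passes midnight; roughly, in C (names are illustrative only):

    #include <stdint.h>

    /* HST = UTC - 10h.  If adding the 10 hours pushes us past midnight,
       the UT day count moves ahead by one (and the year with it, when
       the display logic gets around to noticing).                      */
    void hst_to_utc(int hst_hour, int16_t hst_day_count,
                    int *utc_hour, int16_t *utc_day_count)
    {
        int h = hst_hour + 10;

        *utc_day_count = hst_day_count;
        if (h >= 24) {                /* e.g. 2 PM HST on Dec. 31 ...   */
            h -= 24;
            (*utc_day_count)++;       /* ... is already Jan. 1 in UTC   */
        }
        *utc_hour = h;
    }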

New diskettes prepared for the year 2000 are at the IRTF.

 

What about the time of day? Sidereal time?

The satellite receiver gives us Hawaiian Standard Time in BCD to a precision of hundredths of a second, read when GO is typed to start a session. This is converted to UTC and apparent sidereal times, which are loaded into 32-bit software counters incremented by 50-Hz (civil and sidereal) interrupts from a rubidium master clock, accurate to within one second over the lifetime of the universe. The software counters are integers representing the time in units of 1/50 second. When the Universal Time counter rolls over 24h it increments the working UT day count. The logic that displays the year recognizes a year number increment, if needed, after the day count increments.
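
The counter bookkeeping is simple enough to sketch in a few lines of C (again, illustrative only; the real thing is Forth driven by the interrupt hardware):

    #include <stdint.h>

    #define TICKS_PER_DAY  (24L * 60 * 60 * 50)   /* 50 ticks per second */

    static int32_t ut_ticks;       /* UTC time of day, units of 1/50 s   */
    static int32_t lst_ticks;      /* apparent sidereal time, 1/50 s     */
    static int16_t ut_day_count;   /* working modified Julian day count  */

    /* Called on each 50-Hz civil-rate interrupt from the rubidium clock. */
    void civil_tick(void)
    {
        if (++ut_ticks >= TICKS_PER_DAY) {
            ut_ticks = 0;
            ut_day_count++;        /* UTC midnight: on to the next day;
                                      the year display follows suit when
                                      it next looks at the count         */
        }
    }

    /* Called on each 50-Hz sidereal-rate interrupt from the same clock.  */
    void sidereal_tick(void)
    {
        if (++lst_ticks >= TICKS_PER_DAY)
            lst_ticks = 0;         /* sidereal day wraps; no date kept    */
    }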

The sidereal time for Mauna Kea is initialized at the start of each session using the current civil time and date in a formula with a constant that is manually edited into the system before the start of each year - the apparent sidereal time at Greenwich on January 0, obtained from Table B in the Astronomical Almanac. After the sidereal time counter has been initialized to the apparent sidereal time at Mauna Kea for the present instant, it is maintained at 50 Hz by a hardwired sidereal-rate pulse train from the master clock, and is used to convert between hour angle and right ascension on demand. Sidereal time, of course, knows or cares nothing about civil year numbers.
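
For reference, a standard low-precision version of that initialization looks like the following, in C. The constants are the usual Almanac approximations and a rough Mauna Kea longitude, not necessarily the exact values in the TCS, and the Jan. 0 figure is only a placeholder for the hand-edited one.

    #include <math.h>

    /* Hand-edited once a year: Greenwich apparent sidereal time at Jan. 0
       of the current year, in hours, from Table B of the Almanac.       */
    #define GST_JAN0_HOURS   6.62      /* placeholder value only         */

    #define MK_WEST_LONG_HR  10.3648   /* Mauna Kea, about 155 deg 28' W */

    /* Local apparent sidereal time, in hours, for a given day of the
       year and UT time of day (low-precision approximation).            */
    double local_sidereal_hours(int day_of_year, double ut_hours)
    {
        double gst = GST_JAN0_HOURS
                   + 0.0657098244  * day_of_year   /* sidereal gain/day  */
                   + 1.00273790935 * ut_hours;     /* sidereal/civil rate */
        double lst = gst - MK_WEST_LONG_HR;

        return lst - 24.0 * floor(lst / 24.0);     /* reduce to 0-24h    */
    }

The result of a calculation like this at session start is what gets loaded into the sidereal counter (in 1/50-second units), after which the hardwired sidereal pulse train keeps it current.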

 

What about file dates?

The Forth system we originally purchased had no disk directory services. It was designed to be a real-time, embedded, multitasking operating system, not a data processing system. The mass storage facilities in the original (1976) Forth were limited to organizing the disk in sequential 1,024-byte blocks and reading from and writing to absolute block numbers on the medium.
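
In modern terms, that entire mass-storage interface amounts to nothing more than this (sketched in C, with an ordinary file standing in for the raw medium):

    #include <stdint.h>
    #include <stdio.h>

    enum { BLOCK_SIZE = 1024 };

    /* Read or write 1,024-byte block n by absolute block number - that
       was the whole of the original facility.                           */
    int read_block(FILE *disk, long n, uint8_t buf[BLOCK_SIZE])
    {
        if (fseek(disk, n * BLOCK_SIZE, SEEK_SET) != 0) return -1;
        return fread(buf, 1, BLOCK_SIZE, disk) == BLOCK_SIZE ? 0 : -1;
    }

    int write_block(FILE *disk, long n, const uint8_t buf[BLOCK_SIZE])
    {
        if (fseek(disk, n * BLOCK_SIZE, SEEK_SET) != 0) return -1;
        return fwrite(buf, 1, BLOCK_SIZE, disk) == BLOCK_SIZE ? 0 : -1;
    }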

D'Anne Thompson, an IRTF software engineer back in the '80s, went to the trouble to create an actual disk operating system supporting named files and a master directory, building it on top of the Forth-supplied block structure. This has made life infinitely easier for program development. However, the directory system is more primitive than DOS, with no provision for subdirectories and almost no directory management, but it does what we need.

The file creation and modification dates carried in the directory are 16-bit integers containing the modified Julian day number. The dates in the directory listings are formatted in normal 23 JUN 1991 style on the fly from the stored Julian day number. There is never any arithmetic done with combinations of these dates as may be done with business data processing files. In any case, you never see 2-digit dates.
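
An illustrative directory entry (the real layout lives inside Forth blocks and no doubt differs in detail) would look something like this; note that the two date fields are just more of the same 16-bit day counts, used only for display:

    #include <stdint.h>
    #include <stdio.h>

    struct dir_entry {
        char     name[12];      /* file name                             */
        uint16_t first_block;   /* starting 1,024-byte block number      */
        uint16_t n_blocks;      /* length in blocks                      */
        int16_t  created;       /* day count since Jan. 0, 1950          */
        int16_t  modified;      /* ditto - displayed, never computed with */
    };

    /* A directory listing runs the stored day count through the same
       day-count-to-calendar conversion used everywhere else, then:      */
    void print_dmy(int day, int month, int year)
    {
        static const char *mon[12] = { "JAN", "FEB", "MAR", "APR", "MAY",
                                       "JUN", "JUL", "AUG", "SEP", "OCT",
                                       "NOV", "DEC" };
        printf("%2d %s %4d\n", day, mon[month - 1], year);  /* 23 JUN 1991 */
    }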

By the way, the date format of 18 AUG 1986 that is universal in our Forth was set up that way by a Swedish software engineer who worked with us for a while at the very start of the project. I always thought the European system was a more logical way to express the date than the American way with the month first, so I let it go.

 

Leap year?

To convert back and forth between a normal day-month-year style and the day count within the year, a table of the day numbers on which each month starts is carried in the system. The table (starting with day 0 for January) is set up for leap years, with March starting on day 60. Before the start of each year, the person who generates a new set of diskettes for the upcoming year needs to ponder whether or not the upcoming year is a leap year.

There is code in the date initialization program that is executed at the start of each session that subtracts 1 from the day count derived from the table if the day count is greater than 60, to handle non-leap years. If it is to be a leap year, we comment out the decrement code. If this year is a leap year then next year won't be, so for that year we need to remove the comment symbols so the code executes. I suppose logic could have been programmed in to decide all this automatically from the modified Julian date, but it was never done.
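
Here is the same trick sketched in C (the real code is a few lines of Forth, where the commenting is done with parentheses):

    #include <stdint.h>

    /* Day of the year of the "0th" of each month - a leap-year table,
       so March starts at day 60.                                        */
    static const int16_t month0[12] =
        { 0, 31, 60, 91, 121, 152, 182, 213, 244, 274, 305, 335 };

    /* Calendar date to day of the year.  For a leap year (2000) the two
       marked lines get commented out; for a common year (2001) they go
       back in, so everything after the missing Feb. 29 slides back one. */
    int16_t day_of_year(int month, int day_of_month)
    {
        int16_t d = month0[month - 1] + day_of_month;

        if (d > 60)         /* comment out for a leap year               */
            d -= 1;         /* comment out for a leap year               */

        return d;
    }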

Since the year 2000 will be a leap year - a century year that remains a leap year, something that happens only once every 400 years (aren't you glad you're living through these rare events?) - the diskettes for 2000 are edited to comment out the code that decrements the day number after 60, since the table of day numbers for the start of the months is a leap-year table. But for 2001, the comment symbols (parentheses) will have to be removed.

Details of exactly what needs to be done to prepare a set of TCS diskettes for an upcoming year will be on a Web site if it isn't there already.

 

What about precessing across the Y2K boundary?

One of the telescope operators (DG) told me that long ago D'Anne mentioned something to him about a possibility that the calculation of the precession of the equinoxes might be faulty across the Y2K boundary. Therefore, I ran some tests doing a series of epoch changes, recording the resulting apparent coordinates converted from a single mean point in the sky (around 11h RA and 23° Dec). Actually, we have been using the Bright Star Catalog (FK5) for some time now, which is Equinox 2000.0, so cross-boundary precessions have been going on for a couple of years at least with no apparent ill effects.

In order to verify the operation of the precession program (along with nutation and aberration), I made the current date 1995 and, for the test mean coordinate, did the transformation to apparent position for each year up to 2005. Then I made the current date 2005 and did the same transformations. The results are in the following plot. The ordinate on the plot is the relative change of the coordinates, in seconds of time or arcseconds, due to the mean-to-apparent conversion.

[Plot: Mean-to-apparent coordinate change, in units of seconds of time or arcseconds, from a base date of 1995 or 2005.]

 

There isn't any obvious discontinuity at the 2000 epoch. However, I didn't check other coordinates. What D'Anne might have been referring to is a possible lack of precision as the years mount up. I noticed constants of value 1900.0 in the code, and D'Anne may have been somewhat concerned that the computation accuracy might deteriorate over larger and larger intervals of time with the single-precision floating point we use. I wouldn't worry, though, about the next few years.

 

Y2K - The Real Story

I can see it now. A 60 Minutes-type TV crew strides purposefully into a nursing home, heading down the main corridor, looking for a room number. The nursing home staff trot along behind, hands fluttering, mouthing "can I help you" type statements that are completely ignored. The TV crew finds the room they are looking for and barges in. An elderly invalid is reclining on the bed. The crew grabs him, puts him up against a wall, and sticks a mike in his face. Now on camera, the reporter demands: "Why did you use 2-digit dates when you programmed for Burroughs back in 1956, and therefore are a direct cause of the downfall of technological civilization?"

This scenario might well happen, since programmers are taking the rap unfairly. The shtick is that early computers had very little memory, so nerd programmers, without a care or thought for the future, used only the least significant two digits for the date to save memory and thus make their job easier. Nothing could be further from the truth. Programmers have been fighting to change record formats for at least the last 20 years. So what happened?

At this point, we need to step back and review the whole picture of automated data processing and where it came from. Everything follows from what came first, so we have to go back a ways. Bear with me, this will pay off in the long run.

In the last years of the 19th century, Herman Hollerith, an American engineer and the son of German immigrants, invented an electromechanical data tabulating machine utilizing punch cards as the medium. The cards were made the exact size of American paper currency of the time, in the hopes that eventually the machine might be able to be used to sort currency. The upper left corner of the punch cards was cut off to help keep the orientation of the cards correct in the machine. The card format that IBM eventually standardized had 80 columns and 12 rows. Rectangular holes were punched in the card to mark bits on, and the 12 rows of a column defined a character. Thus, the punch card could carry 80 characters of data in Hollerith code. The tabulating machine was quite efficient at sorting these cards based on the code in specified columns. The business world embraced this machine, which was the product start of what we now know as IBM. The punch card tabulating machine lasted well into the computer age, as a less expensive technology.

In the days of punch cards, it was imperative to keep the data record at no more than 80 characters, or a multiple of that. If you needed 81 or 82 characters, you had to buy another punch card. As a result, considerable effort went into organizing data records to stay within that bound. Fields within the record were shortened to the minimum, using whatever clever techniques the designers could come up with. Dates were truncated to the two least significant characters. After all, the century change was many decades away. You're getting the drift. It wasn't the saving of expensive computer storage that provided the motivation to use 2-digit dates, it was the 80-column punch card. Saving two digits saves 2 1/2 percent of the whole record, and may save having to use an additional punch card.

It is the nature of technology development that a technology retains as much as possible of the previous generation. For example, the first automobiles were modified horse-drawn carriages without the horse. In the case of the development of data processing equipment, commercial computers utilized the Hollerith punch card as the primary input/output data medium. People who were upgrading to computers from the electromechanical sorters were in familiar territory and could work with the same decks of data cards that they used with the old device, and could utilize the same peripheral support equipment, such as off-line card readers and punches. As a matter of fact, the first control computer for the 88" telescope, placed on line in 1970, was a card-based system, the IBM 1800. Remains of the IBM 1442 80-column card reader/punch are still over there, rusting away in the leaky Quonset hut at the old Hilo airport, which used to be our headquarters in Hilo. You may be able to see some rotting punch cards scattered around there, too.

Now, let's go back to 1956 and imagine that you are the data processing executive of a major company. Your firm has decided to rent one of the newfangled mainframes (they didn't call them that then, they were "giant brains"). You have to convert all your tabulating equipment and punch card databases to the new technology. Are you going to redesign all your corporate and business data records and manually re-enter the old data to magnetic tape on the new processor, or are you going to keep the old record formats? Sure, new records will go to mag tape, but I'll bet you'll stay with the old 80-character record formats which are supported by all the auxiliary equipment and programs that you already have in place, as well as the programs supplied with the new giant brain.

Take a look at video display terminals. They replaced ASR-33 Teletype machines as the primary human input/output device. I'll give you three guesses why terminals have always had 80-character rows as the standard.

Fast forward to the 1980's. Systems analysts and programmers were starting to raise the alarm. A high technical official in the Defense Department even went to the President with warnings about Y2K danger in military systems. He was stonewalled at every step. [I have been trying to find the source of this and the name of the individual to relate here, and will do so on a future update of this document if I find it.] This was repeated over and over, at big organizations and small.

If you were the CEO of a large company with competitors breathing down your neck and your vice president of information systems came to you with a project she said was imperative, and the project was going to cost tens of millions of dollars, disrupt the ongoing operations of the company for years, jeopardize business records, fix something that would not be broken for years or decades to come, if ever - some techie thing having to do with digits for the next century that nobody really knew the depth of, and all this without contributing a cent to the bottom line, would you take this to the board of directors to have a budget allocated? I sure wouldn't.

So, don't blame the business programmers, though they readily admit that they could and should have done more early on. Herman Hollerith has the primary responsibility for creating the Y2K problem in the first place, with his 80-column punch card. Of course, we shouldn't blame him either; he was just inventing a technology that was fine for its time. In recent years it has been up to corporate and government leaders to have the foresight to correct this situation, and until their feet were held to the fire by the immediacy of the coming disaster, they weren't about to bother with it.