25 August 1998
Y2K - When Does It Really End?
As was mentioned before, it will soon be the end of the millennium. It is now the 25th of August 1998. If you are reading this in the year 2567, it may not be obvious to you, but the millennium does not end at the end of the year 1999. It ends, rather, at the end of the year 2000. This is easy to see if you remember that the first year was the year 1, so the end of the first decade is the year 10, not the year 9. Similarly, the end of the first century was the year 100, not the year 99, and the end of the first millennium was the year 1000, not the year 999. A number of people, however, refuse to believe this. It is "obvious" to them that the century ends with the year 1999 because the year 2000 is obviously part of the next century, not part of this one. Others of us figure that we will go ahead and celebrate it with them, because we can still celebrate the real one when it comes again a year later. This way we will get two parties out of it.
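If you want that arithmetic spelled out, here it is as a little C program (a sketch of my own, purely for illustration; the point is just that, since there was no year 0, century N runs from year 100*(N-1)+1 through year 100*N):

    #include <stdio.h>

    int main(void)
    {
        int century    = 20;
        int first_year = 100 * (century - 1) + 1;  /* 1901 */
        int last_year  = 100 * century;            /* 2000 */

        printf("Century %d runs from %d through %d.\n",
               century, first_year, last_year);
        return 0;
    }

Run it and it reports that the 20th century runs from 1901 through 2000 - so the party on 1 Jan 2000 comes a year early.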
Having said that, there is a reason why the end of next year will be a big event as well. We are facing the Year 2000, or Y2K, problem. I have no doubt this reference will be meaningless to you through the distance of time. Remember that 50 years ago there were no computers around to speak of. They began to be used by big businesses in the 1960's or so. At the time, the memory in computers was very expensive. I can remember that one of the first computers I ever programmed on used 6-bit characters and had a MAXIMUM of 16,000 characters of memory for storing both data and programs. Most of them had no local storage such as tape or disk. You had to record data by punching holes in "cards". The pattern of the holes stood for bits that recorded the data. The machines would read the cards, process the information, and either print output on paper or punch it into more cards, or both. Since the space on the cards was quite limited, as was the memory in the machine, we went to incredible lengths to save every single character of space. It was obvious that it was not necessary to store all 4 digits of a year when they "always" had 19 as the first 2 digits. To save space, dates were always stored in the cards with only the last 2 digits. The 19 was assumed.
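In modern terms, the convention amounted to something like this little C sketch (my own illustration; the real programs were mostly in languages like COBOL and assembler working on fixed-width card and tape records, but the idea is the same):

    #include <stdio.h>

    int main(void)
    {
        int full_year = 1961;
        int stored    = full_year % 100;   /* only "61" goes on the card */

        printf("punched on the card: %02d\n", stored);
        printf("read back, with the 19 assumed: %d\n", 1900 + stored);
        return 0;
    }

Two characters saved per date, on every card and in every record. Multiply that by millions of records and the savings were very real.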
In the backs of our minds we were aware that this was not a practice that would work forever. But we were young, for the most part, and we figured that we would burn that bridge when we got there. Many of us probably figured we would be dead, or at least retired, before it became a problem. At that time most computer systems were used for no more than 5 years; by then so much progress had been made that it was no longer economical to continue to use the old machines - the new ones were so much faster and so much cheaper. Until the last few years there was no problem with this. Around the end of the 1980's, however, the problem lurking here began to come to the attention of the general public.
The problem is this: Suppose it is 1990, and you want to buy an insurance policy that will last for 10 years. If we are only using the last two digits of the year in our calculations, we take the start year of 90, add 10 years to it, and come up with 100. We keep only the last two digits, 00. Then, just to make sure that everything is well with this calculation, we check that the ending date comes after the start date. We see that the ending date is 00, which is less than the start date, so we reject the transaction. That is the best case. In the worst case, we do not reject it, but do something outrageous like calculate -90 years of interest and "debit" the customer's account, in effect crediting the customer.
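Here is that calculation as a hypothetical C fragment (a sketch of my own; no real insurance system looks exactly like this, but the failure is the same):

    #include <stdio.h>

    int main(void)
    {
        int start = 90;                  /* 1990, stored as two digits */
        int end   = (start + 10) % 100;  /* 100, truncated to 00 */
        int term  = end - start;         /* -90 if nobody checks! */

        if (end < start)
            printf("rejected: end %02d comes before start %02d\n",
                   end, start);          /* the "best case" */
        else
            printf("policy term: %d years\n", term);
        return 0;
    }

With the sanity check, the transaction is rejected; without it, the program happily computes a term of -90 years and calculates interest accordingly.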
That might not be too serious, but consider a computer running an electrical generating plant. For whatever reason, when the date rolls over from 31 Dec 1999 to 1 Jan 2000, if the computer is only looking at the last two digits of the year, there is no telling what it might decide to do - shutting down the plant might be the least harmful outcome.
Now the real rub is this. Figuring out what that computer might do in this case can be a non-trivial problem. How do you go about looking through a program that you did not write to make sure that somewhere in the (perhaps) hundreds of thousands of lines of instructions to the computer there is not some calculation that is going to be ruined by this truncation? Businesses that have put off dealing with this problem are now having to scurry about and spend huge chunks of money to solve it. Programmers who retired years ago are coming out of retirement to search old programs for these Y2K bugs. They are being offered very high salaries and recruiting bonuses to do this work.
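For what it is worth, one common repair for programs whose files cannot be expanded to four-digit years is called "windowing": keep the two digits, but pick a pivot so that small values are taken to mean 20xx and large ones 19xx. A sketch in C (the pivot of 50 is an arbitrary choice of mine for illustration; each shop picks its own):

    #include <stdio.h>

    /* Expand a two-digit year using a fixed window. */
    int expand_year(int two_digit)
    {
        return (two_digit < 50) ? 2000 + two_digit
                                : 1900 + two_digit;
    }

    int main(void)
    {
        printf("99 -> %d\n", expand_year(99));  /* 1999 */
        printf("00 -> %d\n", expand_year(0));   /* 2000 */
        printf("10 -> %d\n", expand_year(10));  /* 2010 */
        return 0;
    }

Of course this only pushes the problem fifty years down the road, but it beats rewriting every file format before 1 Jan 2000.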
A more interesting phenomenon is also appearing. Computers are now embedded in many systems that control things, from flying airplanes to sending out tax bills to running electrical plants. Some systems, like the telephone network, are really nothing more than huge special-purpose computers. If large numbers of such systems all fail at the same time, some people fear that civilization might collapse. There is an element of society that is taking this threat very seriously. Some are selling most of their assets, converting them to precious metals, and moving to remote parts of the country, where they stockpile food, water, etc. against this anticipated collapse.
Other folks feel that this is foolish - that even if the problem is not completely under control by that time, the worst that will happen is that the federal government will not be able to send out tax bills. It is highly unlikely that a pilot flying an airplane at midnight that night will have it on auto-pilot and be taking a nap just then.