When I started programming in the 1980s, we didn't have PCs or email, and the internet was in its infancy. The organisation I was working for had only recently moved from punched cards to magnetic tape, and in order to input and test code you had to book time on a shared dumb terminal. But far and away the biggest issue was computer memory, or rather the lack of it and the cost of adding more.
To put this in some sort of perspective, in the mid-1970s computer memory could cost anything between US$10 and US$100 per kilobyte; today you can pick up a 1TB hard drive for US$58. There are 1,073,741,824 kilobytes in a terabyte, so at 1975 prices that hard drive would cost between US$10,737,418,240 and US$107,374,182,400. Which, if nothing else, shows how far the technology has come in the last 40 years.
This was compounded by the fact that computing evolved from the 80-column punched card mechanical data processing of the early 20th century, where the physical limitations of the card resulted in various techniques being used to compress complex data and calculations down into 80 columns. The popular commercial languages that evolved out of this, such as COBOL, originally stored digits as characters. Hence a 4-digit year would take up 4 characters, or 4 precious bytes of memory.
Given how frequently the date is referenced in calculations, significant savings could be made by truncating the year to 2 digits and assuming the century to be 19. Another common memory-saving trick was to strip the vowels out of data names, so meter-sheet-count became Mtr-Sht-Cnt, saving 6 bytes. I know one programmer who used to snigger every time he pronounced this.
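For anyone who never met the two-digit-year pattern in the wild, here is a minimal sketch of how it goes wrong. It's in C rather than the COBOL most of those systems were written in, and the names are invented for illustration, but the logic is the same: the century is silently assumed to be 19, so any arithmetic that crosses the year 2000 breaks.

```c
#include <stdio.h>

/* A minimal sketch of the classic two-digit-year bug.
 * The century is assumed to be 19, so arithmetic across 2000 breaks. */
int main(void)
{
    int issued_year  = 99; /* a licence issued in 1999, stored as "99" */
    int current_year = 0;  /* the year 2000, stored as "00"            */

    /* Intended result: 1 year old. Actual result: -99 years old. */
    int age = current_year - issued_year;
    printf("Licence age: %d year(s)\n", age);
    return 0;
}
```

Run on 1 January 2000, comparisons and subtractions like this suddenly produced negative intervals, and everything downstream of them misbehaved.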
If programmers thought about this at all, and I suspect most didn't, one of several assumptions was usually made. One, I will have moved on to a new job. Two, I will have retired. Three, the system will have been upgraded long before 2000. In short, it will be someone else's problem, not mine.
In reality few businesses could afford to simply throw away their legacy systems and start again from scratch. One organisation I worked for had 5 million lines of code dating back to the early 1980s: 3 million lines telling the computer what to do and 2 million lines of comments allegedly documenting what the code actually did and the thousands of amendments made to it.
But at least this code could be tested and changed. Embedded systems, where the code was burnt into a computer chip used to control a piece of machinery, could not. In reality the safest solution, where possible, was to turn a piece of equipment off, or to ensure a programme didn't run over the millennium unless it was genuinely needed, as with life support or air traffic control.
In the end I'd say those of us who worked on the Y2K problem did a pretty good job in most instances (but not good enough in some cases, as you'll see below) and, where people with long-forgotten skills were brought out of retirement, a well-paid one as well.
The problems that occurred were mostly minor, with websites and electronic signs displaying the wrong date; the U.S. Naval Observatory master clock, which keeps the US's official time, gave the date on its website as 1 Jan 19100. Some mobile phones also deleted new messages, rather than the older ones, as their memory filled up on 1 January 2000. Perhaps more seriously, in Japan alarms sounded at a nuclear power plant at 2 minutes past midnight, and elsewhere radiation-monitoring equipment failed.
Here in the UK, on 28 December 1999, 10,000 card swipe machines issued by HSBC stopped working, and stores had to rely on paper transactions until the machines started working again on 1 January. Most tragically of all, in Sheffield incorrect Down syndrome test results were sent to 154 pregnant women, and two abortions were carried out as a direct result of a Y2K bug. Four babies with Down syndrome were also born to mothers who had been told they were in the low-risk group.
As a final footnote, it's worth pointing out that many of those legacy systems we fixed for the Y2K bug still have another millennium bug that will only become apparent in 2100: they fail to correctly implement the Gregorian leap-year rule used in calculations such as Zeller's congruence, under which a century year is a leap year only if it is divisible by 400 (we got away with it in 2000, because 2000 is divisible by 400 and so was a leap year anyway). Besides, in the best traditions of programming, we've assumed those legacy systems will have long since been replaced by 2100, and in any case it will be someone else's problem by then, so why worry?
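A short C sketch makes the difference concrete. The naive rule and the full Gregorian rule agree on 2000 but part company on 1900 and 2100:

```c
#include <stdio.h>
#include <stdbool.h>

/* The naive rule many legacy systems baked in: every 4th year is leap. */
static bool leap_naive(int year)
{
    return year % 4 == 0;
}

/* The full Gregorian rule: century years are leap only if divisible by 400. */
static bool leap_gregorian(int year)
{
    return (year % 4 == 0 && year % 100 != 0) || year % 400 == 0;
}

int main(void)
{
    const int years[] = { 1900, 2000, 2100 };
    for (int i = 0; i < 3; i++)
        printf("%d: naive=%d gregorian=%d\n",
               years[i], leap_naive(years[i]), leap_gregorian(years[i]));
    return 0;
}
```

The naive rule happens to give the right answer for 2000, which is why those systems survived the millennium, but it wrongly flags 1900 and 2100 as leap years.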
Of course, if you were using a 32-bit computer that stores Unix time in a signed 32-bit integer, you'll have smugly sailed through the millennium without a care in the world. You have your very own Y2038 bug to worry about. Your computer will stop calculating the date correctly at 03:14:07 UTC on 19 January 2038, when the counter will "wrap around" and store dates and times as a negative number, which these systems will interpret as having occurred on 13 December 1901 rather than 19 January 2038. This is most likely to be a problem for any embedded 32-bit systems still in use in 2038. If I'm still around then, I might just dig my old Amiga out to watch it happen.
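If you'd like to see what that moment looks like, here's a small sketch that simulates the wrap-around with an explicit 32-bit value. It assumes you're running it on a platform whose own time_t is wider than 32 bits, as most now are:

```c
#include <stdio.h>
#include <stdint.h>
#include <time.h>

/* Simulate the Y2038 wrap-around, using the limits of a signed 32-bit
 * integer to stand in for a 32-bit time_t (seconds since 1 Jan 1970 UTC). */
int main(void)
{
    time_t before = (time_t)INT32_MAX; /* the last representable second */
    time_t after  = (time_t)INT32_MIN; /* what the counter wraps to     */

    char buf[64];
    strftime(buf, sizeof buf, "%Y-%m-%d %H:%M:%S UTC", gmtime(&before));
    printf("last valid second: %s\n", buf);

    strftime(buf, sizeof buf, "%Y-%m-%d %H:%M:%S UTC", gmtime(&after));
    printf("one tick later:    %s\n", buf);
    return 0;
}
```

On a 64-bit time_t this prints 2038-01-19 03:14:07 UTC followed by 1901-12-13 20:45:52 UTC, which is exactly the jump a genuinely 32-bit system will make.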