IV - Hardware - What You Do and Don't Need
If hardware bothers you then you can probably safely skip this chapter. It is mostly just a high level overview of the PC hardware evolution so that you can see the larger landscape. This will allow you to keep new developments in perspective and to assess their place in your firm.
Force Justification for Deviation.
If your users think they have a compelling reason to deviate from your standards, make them provide an economic justification for the deviation. If you keep any records at all about the time you spend helping users with problems, you will be able to show the economics of restricting the choices. A reluctance to learn a new system is usually what lies behind a user's desire for some other package. If they want to take the time and trouble and can justify something else, then you will have to live with it. Chances are, though, that they will just go with whatever you have picked.
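The economic justification above is easy to put in concrete terms. Here is a minimal back-of-the-envelope sketch; every figure in it is a hypothetical placeholder, so substitute numbers from your own support records.

```python
# Rough first-year cost of supporting one nonstandard package.
# All figures below are hypothetical; plug in values from your own logs.

support_hours_per_month = 4   # extra help-desk time the odd package consumes
training_hours = 10           # one-time cost to bring the support staff up to speed
loaded_hourly_cost = 50       # salary plus overhead for the person providing support

first_year_cost = (support_hours_per_month * 12 + training_hours) * loaded_hourly_cost
print(f"First-year cost of the deviation: ${first_year_cost:,}")  # $2,900
```

Even modest numbers like these usually make the point: the deviation has to deliver more value than it costs to support.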
Do Not Use Macintosh's.
More accurately I should say, Don't mix PC's and Mac's. (Remember that I am using the term PC to refer to those Personal Computers compatible with the IBM family.) If you are in a seriously right brained business - art direction, fashion design, Zen mastery - any business that requires you to think in a non-linear way, then you might want to use Mac's exclusively. After all, Mac's are capable of doing left brained work even though that is not their main focus.
Notice that I was referring to right brained businesses, not right brained people in left brained businesses. If you are making toasters, your advertising manager can learn to use some of the fine PC desktop publishing tools. See the section on Software.
I have several reasons for my bias against Mac's. First, I must admit that I learned to use computers starting with text only systems, and so I have somewhat of a chip on my shoulder, and the attitude that "real users only need text." Second, on a more practical note, as an MIS manager you should always act to minimize the number of different hardware and software configurations the firm supports. Supporting three Mac's along with fifty PC's will age you prematurely and will cost the firm money.
Third is the matter of critical mass. Pick up the newspaper and look at the help wanted section. See how many ads call for Mac experience compared with IBM PC experience. Also look at the computer ads in the business section. See how many more of the programs advertised are for PC's rather than for Mac's. The last time I checked it ran about seven to one. The reason is that the PC is an open architecture, i.e., IBM published the specifications of the hardware and software so that other vendors could create additional products. The Mac architecture is not open. There will always be more products for PC's than for Mac's, and more users capable of using them. Since there are no Mac clones, they will stay more expensive. (I might add that the same is true of proprietary systems such as the IBM PS/2, Sun systems, etc.)
Ignore all that talk about how people work much faster with Mac's or how much they prefer to work with them. In my opinion it's mostly hype. For people who are largely computer illiterate, machines like the Mac are easier to learn. Think of them as training wheels. They do help you to get started if you are not computer literate. After you learn to ride, training wheels restrict how fast you can turn the corners because they keep you from leaning. After a while they just get in the way and slow your work.
The Mac hype implies that any person with no training can turn out quality desktop publishing. I would like to see someone sue Apple about that. Publishing requires several skills that must be studied at some length. The ease of use of the system is not the factor that keeps most people from creating quality presentations. Amateur publishers with a Mac turn out junk just as do amateur publishers with a PC or amateur publishers with no computer at all.
There is one real advantage to the Mac. Since the architecture is closed, the operating system dictates how everything is going to be done. This means that all programs running on a Mac will operate similarly. The PRINT key will be the same in every Mac program. This similarity is the real reason Mac's are easy to learn. Once you learn to use any program on a Mac you will be able to use most other Mac programs fairly quickly. This assumes that you understand what the program is about. If you have no idea what a spreadsheet is then you are not going to make much headway on Excel on a Mac just because you have learned Word on a Mac. This similarity limits the system, however. Who is to say that we have already invented all the best ways for interacting with programs?
Do Off Brand PC Prices Look Good? Test Them.
The last time I saw a program that would run on some clones but not on others was over three years ago. These days clone software compatibility is generally not a real problem. If you are worried about compatibility, ask to see your major application(s) run on the machine. Have someone from your office operate the thing for a while. Especially if you are buying several, get one and beat it to death for a while.
Don't Worry Too Much About CPU Speed.
CPU speed is irrelevant for most day-to-day applications outside of AutoCAD, graphics, simulation, heavy desktop publishing, or artificial intelligence applications written in LISP.
Even a 10 MHz XT could easily keep up with any word processing operator. When you are ready for the spelling check you can go for coffee if it's too slow. (Actually, most new word processing systems check the spelling as you go, so even that isn’t an issue.) Once you get into the 486/SX range, you will have all the speed you need for most DOS applications. File servers or other special purpose applications are a different matter to some extent, but even then, most Novell file servers run under 10% CPU utilization most of the time. The main restriction on these machines is in RAM and I/O speeds.
Remember, however, that you can readily add RAM later, while upgrading CPU speed is more costly. If you have to skimp at first or to buy the system one part at a time, go for the CPU speed now and add more RAM later. Eventually buy as much RAM as you can afford. Although many programs cannot use it directly, there are some that can, including, most notably, disk cache utilities. Certainly multitasking operating systems will benefit from having as much RAM as they can get.
Brand Is Not Terribly Important - Being 100% Compatible Is.
The problem with buying machines that are not 100% compatible with the original IBM PC is that the repair parts and add-on hardware are proprietary, and thus are (often) sold for inflated prices. Any money you saved on the original machine you will lose eventually when upgrading or repairing. This is typically true of laptop machines.
Strangely enough, most incompatibility arises not in failure to run certain programs but in the physical connections between the parts. A keyboard may use an RJ-11 jack (like a telephone) instead of the DIN plug found on the PC. (Newer keyboards use a smaller connector, but that connector has itself become a de facto standard.) A power supply may plug into the motherboard in a different way. The monitor connection may be a DB-25 instead of the DB-9 or DB-15 used on the PC. A few machines have really unorthodox elements that will work only when a special driver is loaded. A very few are just flat different.
Watch out for this problem particularly with laptop or notebook computers. Since IBM originally didn't make one, there are few standards, and the vendors have all done some very creative things to fit all that stuff in those little cases. Particularly bad in this regard are hard drives and RAM modules. You will pay a high price for the proprietary parts needed to upgrade laptops and notebooks. As this market has matured the hard drives have become somewhat standardized and the prices are no longer so outrageous, but you will still pay about twice the price for a laptop drive as you will for the same size drive for a desktop system.
Look for a Standard XT/AT Footprint.
The XT and the AT had one thing in common: the placement of the I/O card slots and the keyboard connector. There should be eight (or nearly so) vertical slots with the keyboard connector immediately to the left (when viewed from the back of the machine). Although the original AT motherboard was much bigger than the XT, the placement of these external connections was the same. Once clone vendors got to work with the 286 they produced a motherboard with the same shape as the XT. It was commonly called a "Baby AT". This trend continued into the 386 and 486 motherboards. It is quite possible to take an original IBM XT and replace the motherboard with one that has a Pentium II CPU.
Generic clone vendors are firmly committed to a motherboard and case like this. It will help you protect your hardware investment. One way for a manufacturer to build an inexpensive system is to integrate onto the motherboard several controllers that were originally placed on separate cards. There are three problems with this approach: First is the problem just described - the motherboard cannot readily be replaced when faster CPU's are available. Second, if an integrated controller becomes obsolete, it sometimes cannot be replaced with newer technology, just as with the CPU. Last, if a component fails it cannot be readily replaced, and soldering on the motherboard is not desirable. More modern systems have an option in the BIOS that will allow you to disable onboard controllers if you need to upgrade one or to bypass a defective one.
Strangely, the larger the manufacturer, the more likely they are to build such a nonstandard system. I assume that what motivates these firms is the urge to build a complete system at a reasonable price, and not arrogance or the belief that they can "lock you in" to their hardware. Thus some of the major name brands are the least versatile.
A Short History of PC Buses
Except for the keyboard, all the I/O adapters in a PC plug into slots on the motherboard. These slots are all connected, and the data travels down the wires on the motherboard, passing each slot in turn. Engineers viewed this as being like a bus passing down a street, stopping where it needed to, so they referred to it as a bus.
In the original IBM PC, in the XT, and in their clones the data bus was eight bits (one byte) wide. When IBM introduced the AT they changed the bus width to sixteen bits (two bytes). However, they made the new design compatible with the old one, so that cards designed for the PC/XT bus would work in the AT. The clone industry now refers to this bus design as the ISA bus, or Industry Standard Architecture bus.
The success of the clone vendors drove IBM to design the PS/2 series of computers. The smaller models of this family were AT bus machines. The larger models incorporated a new bus design called the Micro Channel Architecture (MCA) bus. The most significant feature of this bus is that it allows for a 32 bit data path. However, the design is totally incompatible with all previous PC bus controllers. Moreover, IBM has kept this bus proprietary. Although they will license it to designers of controller boards and motherboards, the license fees for motherboards are generally prohibitive. The clone vendors therefore published their own design, the Extended Industry Standard Architecture (EISA) bus. It was backward compatible with all old controller designs and also allowed 32 bit data transfers.
A Scattering of Other Buses.
A number of manufacturers of cards were trying to enhance their products but were being held back by this state of affairs. They needed the greater speed but did not want to go with either of these designs. They began to create proprietary bus designs. The most notable was probably the VESA local bus. It was developed by a group of makers of video adapters and was primarily intended for speeding up these cards.
The PCI Bus.
Intel realized the need for a faster bus to support the ever increasing power of their CPUs. They were concerned about the lack of standards, however, so they led an industry group that designed a new bus called the PCI, or Peripheral Component Interconnect, bus. This new design initially allowed for a 32 bit width and ran at 33 MHz. Extensions to the spec later allowed 64 bit widths and 66 MHz speeds. Machines being sold today typically have a few ISA slots for those adapters that are not limited by the bus speed. Typical examples would be modems, serial and parallel ports, scanners and mice. These machines will also have a few PCI slots for higher speed devices such as video, hard disk and network adapters.
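A bus's peak transfer rate follows directly from its width and clock rate. A minimal sketch of that arithmetic, using the commonly quoted nominal figures and ignoring protocol overhead:

```python
def peak_bandwidth_mb(width_bits, clock_mhz):
    """Nominal peak transfer rate in megabytes per second:
    bytes per transfer times transfers per microsecond."""
    return width_bits // 8 * clock_mhz

# Original PCI: 32 bits wide at 33 MHz
print(peak_bandwidth_mb(32, 33))   # 132 MB/s
# Extended PCI: 64 bits wide at 66 MHz
print(peak_bandwidth_mb(64, 66))   # 528 MB/s
# Classic ISA, for comparison: 16 bits at 8 MHz
print(peak_bandwidth_mb(16, 8))    # 16 MB/s
```

The comparison makes it clear why video and disk adapters moved to PCI while modems and mice stayed happily on ISA.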
Upcoming buses.
There are two new buses coming along in the near future. These are the USB, or Universal Serial Bus, and the FireWire bus. The USB interface is designed to support a wide range of low speed devices such as those already mentioned. It may also support home audio systems and appliances. It will ultimately replace the older ISA bus. Eventually the operating systems developers want to drop support of the ISA bus altogether because it lacks support for the protocols that let the OS automatically recognize the adapters and configure itself.
The FireWire, or IEEE 1394, bus is designed to support multiple high speed devices without some of the limitations of the current bus designs. This bus is further out in the future than is the USB. USB devices are already on the market. FireWire will probably not completely replace PCI buses for some time.
Avoid Non-standard I/O Cards.
There are many vendors who have made improvements in the designs of adapters. One place where you were especially likely to run into this at one time was video adapters. A notable example is the Hercules adapter, which simulates a color video adapter by using varying intensity and patterns on a monochrome monitor. This is one of the very few cases where a vendor made a big improvement that became widely supported by other vendors. Another example is the family of Expanded RAM cards that conform to the Lotus-Intel-Microsoft (LIM) specification. For the most part these non-standard devices will cause you more trouble than they are worth. Both of these de facto standards are now generally obsolete, though you might see references to them from time to time.
A good rule of thumb would be that you should avoid such non-standard devices unless they give you some strategic advantage over your competition or allow some required function that you cannot get any other way. I suggest you not buy such devices if there is no better reason than that they are a little cheaper, a little faster, or give a little better picture on the monitor. You will have more trouble configuring software for them, you will have more compatibility problems with other hardware, and you will have trouble getting them replaced or repaired.
Other Things to Look For:
Feel the weight of the keyboard - like slamming a car door, good ones feel different from cheap ones. Similarly, wiggle the keys around. They should not move much except in the normal keystroke direction. You will use the keyboard a lot and you will bang on it, so you will want a good one.
In a desktop machine you should get at least a 200 watt power supply in a new system. If the machine is under-powered it will be too hot inside, shortening the lifetime of the parts. Anything over 250 watts is probably unnecessary except for servers.
If you are offered an optional cache RAM on the motherboard the best size is 256 K. Most research indicates that anything over 256 K is wasted on anything except multi-user systems like Unix or Citrix.
How Big Should the Hard Disk Be?
A rule of thumb for disk space is that most users will need five to ten megabytes of file space for current tasks. If the application is heavily graphics oriented, double that. Windows and the application software will take some space, so plan on at least a 1 GByte drive per user. It will be gone before you know it. Remember Parkinson's law: "Work expands to fill the capacity to do the work." Disk space appears to follow this rule.
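The rule of thumb above turns into a sizing estimate with a little arithmetic. A quick sketch, using hypothetical head counts for a small office:

```python
# Disk sizing from the rule of thumb: 5-10 MB of working file space per
# user, doubled for graphics-heavy work. All head counts are hypothetical.

users = 20
graphics_users = 5      # subset of the 20 doing graphics-heavy work
mb_per_user = 10        # upper end of the 5-10 MB rule

working_mb = (users - graphics_users) * mb_per_user \
           + graphics_users * mb_per_user * 2   # graphics allowance is doubled
print(f"Working file space for the office: {working_mb} MB")  # 250 MB
```

Note how small the working-space total is next to the 1 GByte per user recommended above: the operating system, the applications, and plain old accumulation are what actually eat the drive.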
Hard Disk Interface Types
There are five types of disk drive interface/controllers that you might encounter. These are usually known as: MFM, RLL, ESDI, SCSI, and IDE. You must match the drive with the controller. That is, a drive designed for IDE can only be used with an IDE controller. There is no clearly superior technique. Each design has its advantages and tradeoffs.
If you find systems with older MFM, RLL or ESDI drives you should probably just give them away. These days it would be impossible to repair them and much new software does not even support them. If you have a legacy system you might just let it run. You will probably have to replace it before the year 2000 anyway.
SCSI, or Small Computer System Interface, drives are the fastest technology, and up to eight devices can be hooked to a SCSI controller. These devices are more reliable than the others. They are also potentially the fastest of all. The eight devices usually includes at least one computer, so you may hear it said that these controllers support seven drives. They can also support multiple computers, so it is possible (though admittedly unusual) to share six drives between two computers.
IDE, or Integrated Drive Electronics, is a technology that has moved part of the controller onto the drive. The chief advantage of these interfaces is that they are cheap. They are limited to two drives per controller. Most new small drives are of this type.
Pick 1 or 2 Models of Printer and Stick With Them.
If you have ten software packages and ten different printer models you can chew up a full time person with nothing but printer control problems. LAN's make the problem even worse because the spooling software will try to help you and will sometimes interfere with the application program's control of the printer.
For example, choose 1 laser for final output, and 1 dot matrix for draft. If possible, use the same brand and the same vendor for both types. Because you will be buying more printers from them you will get more of their attention. Supplies also will be easier to keep in stock and cheaper to buy because you will be buying in greater volume.