5. System Hardware Configuration
The first thing to consider is the physical placement of the server. If you have
a location with 24-hour staffing, and adequate power, cooling and connectivity,
that's the ideal place to put a server. This would typically be a staffed computer
room, or a security office.
If there is no available location that's staffed continuously, then a lockable
room (or ventilated cabinet) should be used. Only a minimum number of people
should have access to this space. There should be adequate power, cooling (may
be difficult for a lockable ventilated cabinet), and network connectivity.
Most of this subject has little bearing on security, but now is the proper time
to consider it. I've searched for information on this topic, but found very little.
Due to this lack of information, I've assembled some background information, and a
few suggestions based on my experience.
This section does not cover file-system mount options, which do have an
impact on security. Those options are covered later in this paper.
Disks are connected to a computer over channels. The most commonly used types
of channels are IDE, SCSI and Fiber Channel. If your server is performing disk-
intensive operations, then an attempt should be made to maximize the number of
channels, and to spread the disks across them.
IDE disks are relatively inexpensive, in comparison to SCSI or Fiber Channel,
because they are produced in much larger quantities, and because the electronics
on the disk are simpler. The drawback is that IDE drives are usually slower, and
exhibit a significant amount of channel contention. If at all possible, use only
one disk per IDE channel. If it is necessary to put a second disk on the channel,
one of the disks should have relatively light I/O requirements. No more than two
disks can be placed on an IDE channel.
SCSI disks usually have a faster access time than IDE disks, and a faster data
transfer rate to/from the platter. With UNIX, the actual bus speed has little
effect on I/O throughput, as long as it's greater than the platter data speed,
because of the buffering that UNIX performs.
SCSI disks are usually priced between two and three times as high, per unit of
capacity, as IDE disks. Once the disk I/O requirements exceed what IDE can deliver,
it becomes necessary to move to SCSI. SCSI disks have little channel contention,
until the bus is saturated with data. With a limit of 15 disk drives (more
if Logical Unit Numbers are used), it is not a difficult task to saturate a
SCSI bus on a busy system.
Fiber Channel disks (spelled fiber or fibre) are normally built using the HDA
and most of the drive electronics of a SCSI drive. Only the physical interface,
and a little bit of the microcode (the software that runs on the actual disk
drive), needs to be changed. The primary advantage of Fiber Channel disks over
SCSI disks is the number of devices that can be put on a single channel (127
vs. 15). Another advantage is that the disks can be located farther from the
computer. Fiber Channel interfaces are available with either copper or optical
connections. The copper interconnect cable is less expensive than the optical
cable, but the optical interconnect cable is immune to electrical interference,
and can be used on longer runs. In any case, the interface to the disk drive is
always copper, with any necessary conversion being done in the chassis.
The pricing for Fiber Channel disks is similar to the pricing for SCSI disks.
The main reason for the price similarity is that the HDA (Head Disk Assembly) in
a given SCSI disk is normally also used in a Fiber Channel disk, with different
interface electronics.
When there is activity on a single disk drive from multiple sources, this is called
drive contention. Proper file-system layout can minimize this, but it usually
requires a significant amount of knowledge about the I/O patterns of each file-system.
The knowledge to lay out file-systems to minimize drive contention is usually
gained from painful experience, and is difficult to put clearly into words.
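Drive contention is usually easier to observe than to predict. The commands
below are an illustrative sketch using the standard Solaris iostat(1M) and
df(1M) utilities; the interval and count values are arbitrary examples, not
recommendations.

```shell
# Extended per-device statistics, every 5 seconds, 3 reports.
# A disk showing a high %b (percent busy) value and a non-empty
# wait queue, while other disks on the same channel sit idle, is
# a candidate for having one of its busy file-systems moved.
iostat -xn 5 3

# Map the busy device back to the file-systems mounted on it.
df -k
```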
As part of the Solaris installation, you will be given a choice between a Custom
and an Automatic (default) file-system layout. If you select Custom,
you will be asked to enter the information about the file-system layout that you
want. The following should be kept in mind when the file-system layout is entered.
- It is necessary to have both a root and a swap partition.
In fact, multiple swap partitions are supported, and, under some
circumstances, might be appropriate.
- It is possible to have a separate /usr file-system. On desktop
workstations, this may be a good idea, but I don't suggest that it be done
on a server. One of the problems is that the only way to check the /usr file-system
is to boot from an external source (CDROM or Network).
- I suggest that you establish a separate /var file-system. This
is the file-system that is used to spool information to be processed, or that
has been processed and is waiting to be returned to the user, and to store
logs. It is also the file-system that is used by Solaris to store information
on the packages and patches that have been installed. If you do not create
a separate /var file-system, then an action that generates large
amounts of data for the /var directory tree could easily fill the
root file-system, causing the system to exhibit erratic behavior.
Additionally, if the system is a mail server, then it is often appropriate
to have a separate /var/spool/mail file-system. The partition for
this file-system should be made significantly larger than you expect will
be needed. Other examples of the /var file-system needing to be
further subdivided are /var/log on a log server, /var/adm
on an accounting server, /var/spool/mqueue on an outgoing mail
server, and /var/spool/lpd for a print server.
- Many of the SUN optional packages, and third-party binary packages, are
installed under the /opt directory tree. If you use many of these,
it might be a good idea to have a separate /opt file-system. As with
the /var file-system, the goal is to keep the root file-system
from filling up. Again, the partition for this file-system should be made
significantly larger than you expect will be needed.
- Many of the freeware and source (i.e. GNU) packages are normally installed
under the /usr/local directory tree. If there are more than just
a few of these, I suggest that you have a separate /usr/local file-system.
This will help to keep the /usr file-system (or the root
file-system where there is no /usr file-system) from filling up.
Again, the partition for this file-system should be made significantly larger
than you expect will be needed.
- The /tmp and /var/run directories are, by default, mounted
on top of the swap partition (file-system type of tmpfs).
This means that they use the same disk space as the swap partition.
It is possible that this could cause a server to run out of swap space, or of
/tmp space (space in /var/run shouldn't be a problem).
If either of these problems arises, or is expected to arise because of an
anticipated heavy load on the /tmp file-system, then it is suggested that a
separate partition be allocated for the /tmp file-system. Alternatively,
it might be appropriate to allocate more space to the swap partition(s).
Never allow the /tmp directory to be left in the root partition.
This is a tradeoff between speed (tmpfs is very fast, because
it's heavily buffered in memory) and contention (whether memory is to be used
for tmpfs or for programs). This sort of decision often requires benchmarking
to determine the best solution.
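To make the layout above concrete, here is a hypothetical /etc/vfstab fragment
for a server with separate /var, /opt and /usr/local file-systems. The disk and
slice names (c0t0d0s5 and so on) are invented for illustration; substitute the
devices on your own system.

```
#device            device             mount       FS     fsck  mount    mount
#to mount          to fsck            point       type   pass  at boot  options
/dev/dsk/c0t0d0s5  /dev/rdsk/c0t0d0s5 /var        ufs    1     no       -
/dev/dsk/c0t0d0s6  /dev/rdsk/c0t0d0s6 /opt        ufs    2     yes      -
/dev/dsk/c0t1d0s7  /dev/rdsk/c0t1d0s7 /usr/local  ufs    2     yes      -
```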
First, it should be noted that calling anything on a Solaris system swap is
a misnomer. In reality, the swap partition is used for paging. The name is
a carryover from the early days of UNIX, when paging wasn't supported, and entire
programs had to be moved from memory to disk (swapped).
The swap space on a Solaris system functions as an extension of the
memory on a system. Disk space that is used in this manner is referred to as
virtual memory. When real memory (the RAM in a system) becomes full,
the operating system will move portions of programs (called pages; usually
4096 bytes per page) onto virtual memory. The act of moving pages between real
and virtual memory is called paging.
Keeping track of these virtual memory pages isn't significantly more complex
than keeping track of real memory pages. The problem is that paging from virtual
memory back into real memory can be a time-consuming task.
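Paging activity can be observed with vmstat(1M). This is an illustrative
Solaris command, shown here as a sketch rather than a script to run verbatim:

```shell
# Print virtual-memory statistics every 5 seconds.
# Watch the sr (page scan rate) column: a sustained non-zero
# scan rate means the page scanner is hunting for pages to
# reclaim, i.e. real memory is oversubscribed and the system
# is paging to the swap device.
vmstat 5
```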
When determining the amount of disk space to allocate for swap, you
need to consider the maximum possible system memory usage. The sum of real and
virtual memory (swap space) should be well in excess of the maximum possible
system memory usage. This is primarily because applications have a tendency
to grow, and users always seem to be able to find something new to run on a
system. Although it is theoretically possible to run a Solaris system with
no swap space, only an expert should attempt to do so.
Also, when Solaris performs a crash dump, it places the dump into the swap
area. As part of the reboot, this dump is read into the /var/crash directory
(if dumps are enabled; they are by default). If there is not adequate space
in swap to store the dump, then it will be lost. For this reason, it is advised
that the swap space be at least as large as real memory. The operation of crash
dumps can be altered with the dumpadm command.
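The current swap and crash-dump configuration can be inspected with the
following Solaris commands, shown for illustration:

```shell
# List each configured swap device and its size
# (the blocks column is in 512-byte blocks).
swap -l

# Summarize virtual memory: how much is allocated,
# reserved, and still available.
swap -s

# Display the crash dump configuration: the dump device,
# the savecore directory, and whether savecore runs on reboot.
dumpadm
```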
Finally, the file-system type of tmpfs uses both real and virtual memory.
This creates a very fast file-system, as much of the file-system structure
resides in memory. Unfortunately, this file-system format is also transient
in nature, as it is lost each time the system is rebooted. By default, the
/tmp and /var/run file-systems are mounted as type tmpfs. There are several
kernel tuning parameters that adjust the functionality of tmpfs. They are discussed
in the Solaris Tunable Parameters Reference Manual (9).
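In addition to the kernel tunables, tmpfs accepts a per-mount size option that
caps how much swap-backed space /tmp may consume. A hypothetical /etc/vfstab
line follows; the 512m figure is an arbitrary example, to be sized for your
workload.

```
#device   device   mount  FS     fsck  mount    mount
#to mount to fsck  point  type   pass  at boot  options
swap      -        /tmp   tmpfs  -     yes      size=512m
```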
When you install a system, you should have a good idea as to how much network
connectivity the system will need. If the system is placed in a location to which
network connections must be run, it would be a good idea to make sure that
the number of connections run to that location is at least twice the number needed.
Allowing for growth will increase the amount of time until additional or replacement
network connections need to be run.
Additionally, a server should never be connected to a hub. If possible, a
server should receive a dedicated switched port to maximize the bandwidth actually
available to the server.