author     ru <ru@FreeBSD.org>    2002-12-12 17:26:04 +0000
committer  ru <ru@FreeBSD.org>    2002-12-12 17:26:04 +0000
commit     041d1287e88250bf06ad159c6c696bd653a77957 (patch)
tree       8a8960200349aa661a39654202f6e0adc5e7360e /sbin/raidctl
parent     719bf117173e17f5286b995c031a11d553149e50 (diff)
Uniformly refer to a file system as "file system".
Approved by: re
Diffstat (limited to 'sbin/raidctl')
-rw-r--r--    sbin/raidctl/raidctl.8    46
1 file changed, 23 insertions, 23 deletions
diff --git a/sbin/raidctl/raidctl.8 b/sbin/raidctl/raidctl.8
index c9a3974..45d1211 100644
--- a/sbin/raidctl/raidctl.8
+++ b/sbin/raidctl/raidctl.8
@@ -146,7 +146,7 @@ as a hot spare for the device
Make the RAID set auto-configurable. The RAID set will be
automatically configured at boot
.Ar before
-the root filesystem is
+the root file system is
mounted. Note that all components of the set must be of type RAID in the
disklabel.
.It Fl A Ic no Ar dev
@@ -195,7 +195,7 @@ the selected device. This
be done for
.Ar all
RAID sets before the RAID device is labeled and before
-filesystems are created on the RAID device.
+file systems are created on the RAID device.
.It Fl I Ar serial_number Ar dev
Initialize the component labels on each component of the device.
.Ar serial_number
@@ -394,7 +394,7 @@ for a more complete configuration file example.
.Sh EXAMPLES
It is highly recommended that before using the RAID driver for real
-filesystems that the system administrator(s) become quite familiar
+file systems that the system administrator(s) become quite familiar
with the use of
.Nm ,
and that they understand how the component reconstruction process
@@ -622,7 +622,7 @@ it is then safe to perform
.Xr newfs 8 ,
or
.Xr fsck 8
-on the device or its filesystems, and then to mount the filesystems
+on the device or its file systems, and then to mount the file systems
for use.
.Pp
Under certain circumstances (e.g. the additional component has not
@@ -680,7 +680,7 @@ raidctl -P raid0
is used. Note that re-writing the parity can be done while
other operations on the RAID set are taking place (e.g. while doing a
.Xr fsck 8
-on a filesystem on the RAID set). However: for maximum effectiveness
+on a file system on the RAID set). However: for maximum effectiveness
of the RAID set, the parity should be known to be correct before any
data on the set is modified.
.Pp
@@ -734,7 +734,7 @@ are the component lines which read
and the
.Sq Parity status
line which indicates that the parity is up-to-date. Note that if
-there are filesystems open on the RAID set, the individual components
+there are file systems open on the RAID set, the individual components
will not be
.Sq clean
but the set as a whole can still be clean.
@@ -995,19 +995,19 @@ raidctl -A no raid0
.Ed
.Pp
RAID sets which are auto-configurable will be configured before the
-root filesystem is mounted. These RAID sets are thus available for
-use as a root filesystem, or for any other filesystem. A primary
+root file system is mounted. These RAID sets are thus available for
+use as a root file system, or for any other file system. A primary
advantage of using the auto-configuration is that RAID components
become more independent of the disks they reside on. For example,
SCSI ID's can change, but auto-configured sets will always be
configured correctly, even if the SCSI ID's of the component disks
have become scrambled.
.Pp
-Having a system's root filesystem (/) on a RAID set is also allowed,
+Having a system's root file system (/) on a RAID set is also allowed,
with the
.Sq a
partition of such a RAID set being used for /.
-To use raid0a as the root filesystem, simply use:
+To use raid0a as the root file system, simply use:
.Bd -unfilled -offset indent
raidctl -A root raid0
.Ed
@@ -1019,9 +1019,9 @@ arguments.
Note that kernels can only be directly read from RAID 1 components on
alpha and pmax architectures. On those architectures, the
.Dv FS_RAID
-filesystem is recognized by the bootblocks, and will properly load the
+file system is recognized by the bootblocks, and will properly load the
kernel directly from a RAID 1 component. For other architectures, or
-to support the root filesystem on other RAID sets, some other
+to support the root file system on other RAID sets, some other
mechanism must be used to get a kernel booting. For example, a small
partition containing only the secondary boot-blocks and an alternate
kernel (or two) could be used. Once a kernel is booting however, and
@@ -1039,7 +1039,7 @@ NetBSD installation.
.It
wd1a - also contains a complete, bootable, basic NetBSD installation.
.It
-wd0e and wd1e - a RAID 1 set, raid0, used for the root filesystem.
+wd0e and wd1e - a RAID 1 set, raid0, used for the root file system.
.It
wd0f and wd1f - a RAID 1 set, raid1, which will be used only for
swap space.
@@ -1051,7 +1051,7 @@ wd0h and wd0h - a RAID 1 set, raid3, if desired.
.El
.Pp
RAID sets raid0, raid1, and raid2 are all marked as
-auto-configurable. raid0 is marked as being a root filesystem.
+auto-configurable. raid0 is marked as being a root file system.
When new kernels are installed, the kernel is not only copied to /,
but also to wd0a and wd1a. The kernel on wd0a is required, since that
is the kernel the system boots from. The kernel on wd1a is also
@@ -1059,9 +1059,9 @@ required, since that will be the kernel used should wd0 fail. The
important point here is to have redundant copies of the kernel
available, in the event that one of the drives fail.
.Pp
-There is no requirement that the root filesystem be on the same disk
+There is no requirement that the root file system be on the same disk
as the kernel. For example, obtaining the kernel from wd0a, and using
-da0s1e and da1s1e for raid0, and the root filesystem, is fine. It
+da0s1e and da1s1e for raid0, and the root file system, is fine. It
.Ar is
critical, however, that there be multiple kernels available, in the
event of media failure.
@@ -1110,7 +1110,7 @@ Distribution of components among controllers
.It
IO bandwidth
.It
-Filesystem access patterns
+File system access patterns
.It
CPU speed
.El
@@ -1155,7 +1155,7 @@ problem in the real world, it may be useful to ensure that stripe
sizes are small enough that a
.Sq large IO
from the system will use exactly one large stripe write. As is seen
-later, there are some filesystem dependencies which may come into play
+later, there are some file system dependencies which may come into play
here as well.
.Pp
Since the size of a
@@ -1167,13 +1167,13 @@ data per stripe is 64 blocks (32K) or 128 blocks (64K). Again,
empirical measurement will provide the best indicators of which
values will yeild better performance.
.Pp
-The parameters used for the filesystem are also critical to good
+The parameters used for the file system are also critical to good
performance. For
.Xr newfs 8 ,
for example, increasing the block size to 32K or 64K may improve
performance dramatically. As well, changing the cylinders-per-group
parameter from 16 to 32 or higher is often not only necessary for
-larger filesystems, but may also have positive performance
+larger file systems, but may also have positive performance
implications.
.Pp
.Ss Summary
@@ -1225,13 +1225,13 @@ disklabel -R -r raid0 /tmp/label
.Ed
.Pp
.It
-Create the filesystem:
+Create the file system:
.Bd -unfilled -offset indent
newfs /dev/rraid0e
.Ed
.Pp
.It
-Mount the filesystem:
+Mount the file system:
.Bd -unfilled -offset indent
mount /dev/raid0e /mnt
.Ed
@@ -1251,7 +1251,7 @@ the /etc/rc scripts.
Certain RAID levels (1, 4, 5, 6, and others) can protect against some
data loss due to component failure. However the loss of two
components of a RAID 4 or 5 system, or the loss of a single component
-of a RAID 0 system will result in the entire filesystem being lost.
+of a RAID 0 system will result in the entire file system being lost.
RAID is
.Ar NOT
a substitute for good backup practices.
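
The substitution is purely mechanical: every "filesystem" or "filesystems" (and the capitalized "Filesystem") in raidctl.8 becomes "file system" or "file systems", which accounts for the 23 paired insertions and deletions above. As a rough illustration only, an equivalent pass could be sketched with sed; the actual edit may well have been made by hand, and a blind substitution like this would still need a manual read-through for hyphenated or otherwise compounded forms:

    # hypothetical sketch, not the tooling actually used for this commit
    sed -e 's/filesystems/file systems/g' \
        -e 's/filesystem/file system/g' \
        -e 's/Filesystem/File system/g' \
        sbin/raidctl/raidctl.8 > raidctl.8.new
    mv raidctl.8.new sbin/raidctl/raidctl.8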