General AIX Storage Management

This chapter will explore storage management from a more practical perspective. Much of the functionality available in Version 3 is still included in Version 4, so the chapter will provide useful information for storage management at either release.

Introduction

The purpose of this chapter is to provide the reader with information that should prove useful when applying the basic concepts of storage management in a business environment. The areas covered will include the management of physical storage devices, volume groups, and logical volumes using the Logical Volume Manager (LVM) functions provided in AIX. Consideration will also be given to issues relating to performance, availability, and backup and restore capability.

Managing Physical Volumes

Management of physical volumes involves day-to-day activities that not only ensure that they are installed and configured correctly, but also that they are maintained in a correct operating environment and monitored regularly, so that precautionary steps can be taken to prevent disasters arising from physical volume failures.

For a physical volume to be used for storage purposes, either by the LVM or directly via a low-level interface, it must first be recognized by the system and configured to be in an Available state. This can be done fairly quickly and easily using the AIX Systems Management Interface Tool (SMIT). Once the physical volume is recognized by the system, it can be included in a volume group, and a logical volume or journaled file system can then be created on it, providing users with the ability to store data on it.

When a RISC System/6000* computer system is powered on or rebooted, the AIX operating system will attempt to configure all devices physically connected to it. Some devices may be attached while the system is up, and these can normally be configured using the cfgmgr command, or by using SMIT. The fact that a physical volume is configured and known to the system does not mean that it is ready to be used for storage purposes.

This section will attempt to describe only those areas specifically relating to physical volumes. This will include the configuration of physical volumes, modifying their characteristics, removing them, monitoring them, and listing information about them.

Configuration of Physical Volumes

It is possible to check the configured state of a physical volume which has been correctly installed. This can be done using SMIT or by issuing the following command:
# lsdev -Cc disk

The above command will produce output that will look something like the following:


hdisk0 Available 00-06-00-00 857 MB SCSI Disk Drive
hdisk1 Available 00-06-00-10 857 MB SCSI Disk Drive
hdisk2 Available 00-07-00-00 857 MB SCSI Disk Drive
hdisk3 Available 00-07-00-10 857 MB SCSI Disk Drive
hdisk4 Defined 00-08-00-00 857 MB SCSI Disk Drive
hdisk5 Defined 00-08-00-10 857 MB SCSI Disk Drive

The second column of this output indicates the configured state of each physical volume. Only those disks with an Available state have been successfully configured by the system. It is quite likely that disks in the Defined state were switched off during the configuration process. If this is the case, they can be configured, as previously mentioned, by using the cfgmgr command or smit cfgmgr after powering them on.


# cfgmgr
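As a sketch of how the Defined disks could be picked out automatically, the fragment below filters the state column of the lsdev output. This is an illustrative assumption, not a documented procedure: the here-document stands in for live `lsdev -Cc disk` output, and the disk names are simply those from the sample listing above.

```shell
# Sample lsdev -Cc disk output; on a live system this would be
# replaced by: lsdev_output=$(lsdev -Cc disk)
lsdev_output='hdisk0 Available 00-06-00-00 857 MB SCSI Disk Drive
hdisk1 Available 00-06-00-10 857 MB SCSI Disk Drive
hdisk4 Defined   00-08-00-00 857 MB SCSI Disk Drive
hdisk5 Defined   00-08-00-10 857 MB SCSI Disk Drive'

# Print the name of each disk whose second (state) field is Defined.
defined_disks=$(printf '%s\n' "$lsdev_output" | awk '$2 == "Defined" { print $1 }')
printf '%s\n' "$defined_disks"
```

The resulting names identify the disks that still need configuring once they have been powered on.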

Modifying Physical Volume Characteristics

There is very little that can be done to modify the characteristics of a physical volume. The two characteristics that can be changed, the allocation permission and the availability state, should nevertheless be given careful consideration, since they provide a way of controlling the usage of physical volumes.

It may sometimes be necessary to restrict further physical partitions from being allocated to any new or existing logical volumes. When a new logical volume is created or an existing one extended, free physical partitions are allocated to it.

The chpv command can be used to restrict further physical partition allocations from occurring. This can easily be achieved by issuing the following command:


# chpv -an PVname

After issuing the above command any allocations for physical partitions for the physical volume PVname will not be allowed. Note, however, that access to existing journaled file systems and logical volumes is still possible.

If the physical volume characteristics need to be reversed, this can be carried out by issuing the following command:


# chpv -ay PVname

An active physical volume is considered to be in an Available state, since logical I/O to it is allowed to continue. The state of a physical volume can be changed from active to not active to stop all logical I/O to it. There are instances when access to file system or logical volume data needs to be stopped, particularly if the physical volume is partially or wholly damaged and needs to be repaired or replaced. In a high-security environment, it may even be necessary to physically remove a disk from the system overnight and lock it away. This can be achieved by making the physical volume unavailable, that is, by setting its state to Removed.

Whatever the reason, the physical volume can be made unavailable by setting the state to Removed using the command:


# chpv -vr PVname

In the above example, the state of physical volume PVname is changed from Available to Removed. To check the state, the lsvg command can be used.

The above modification is only allowed if there are two or more physical volumes in a volume group and more importantly if this action can be performed without losing quorum. In a two-disk volume group, the chpv command will fail if the disk containing two copies of the VGDA/VGSA is being removed, since more than 50% of the VGDA and VGSA copies will be lost. Also, before a physical volume can be made unavailable, all file systems must be unmounted and all open logical volumes must be closed.
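The quorum arithmetic behind this rule can be sketched numerically. The fragment below is plain shell arithmetic, not an AIX command, and assumes the two-disk layout described above: one disk holding two VGDA copies and the other holding one, for a total of three.

```shell
# Quorum check for removing a disk from a two-disk volume group.
total_vgdas=3               # 2 copies on one disk + 1 on the other
vgdas_on_removed_disk=2     # the disk holding two copies is being removed

remaining=$((total_vgdas - vgdas_on_removed_disk))

# Quorum requires strictly more than half of the VGDA copies to survive.
if [ $((2 * remaining)) -gt "$total_vgdas" ]; then
    echo "quorum kept: chpv -vr would succeed"
else
    echo "quorum lost: chpv -vr would fail"
fi
```

With only one of three copies remaining, quorum is lost, which is exactly why the chpv command refuses the operation in this case.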

To bring the physical volume PVname back into an Available state, thereby allowing logical I/O to the device to occur, the following command needs to be executed:


# chpv -va PVname

Removing Physical Volumes

The configured state of physical volumes is Available when the system is powered on. However, to unconfigure a physical volume and place it in the Defined state, the rmdev command can be used. Before a physical volume is disconnected from the system, it must be unconfigured. In the Defined state, access to the physical volume by the LVM is prevented until it is again made available. The rmdev command, as invoked below, will change the state of the physical volume from Available to Defined:


# rmdev -l PVname

Although physical volume PVname will be unconfigured, its definition will remain in the ODM. Retaining this information is important, particularly if the physical volume is later to be reinstated with the same characteristics as before.

Monitoring Physical Volumes

Physical volume failures can and do occur for many reasons. More often than not, they are caused by inadequate operating conditions: cables left trailing loosely on the floor at risk of being pulled out, temperature and humidity not controlled properly, or physical volumes exposed to direct sunlight or strong magnetic fields. These physical conditions need to be addressed for physical volumes to function properly. The tolerances of physical volumes may differ, and these can be obtained from the hardware specifications supplied by the manufacturers.

Failures arising as a result of these conditions can largely be avoided. However, physical volumes sometimes also suffer from other problems which cannot be identified so easily, such as a non-operational cooling fan in a disk enclosure or a damaged sector on the disk. It is, therefore, imperative that physical volumes are monitored regularly, both in terms of their physical environment and their physical characteristics, while they remain in operation.

An important command available to a systems administrator is errpt, which allows physical volume failures, including impending ones, to be detected early. Error reporting must be started on the system for this command to produce useful information. Different levels of detail can be extracted by using different command line options. Initially, it may only be necessary to extract summary information to see what errors have been reported, how frequently they are occurring, and whether they are of a permanent or temporary nature. This can then be followed by a more detailed report to find out their causes. A summary error report can be produced quickly by using the following command:


# errpt | pg

This will produce a one line summary for each error logged on the system with the most recent error first. The fields identifying the error will be:
IDENTIFIER:
A numeric error identifier for the type of error that has occurred.
TIMESTAMP:
This will indicate the date and time the error occurred. The format of this field is MMDDhhmmYY, representing the month, day, hour, minute, and year respectively.
T:
The error type used to identify if the error is permanent (P) or temporary (T).
C:
The error class used to identify if the error is hardware (H), software (S) or operator (O) related.
RESOURCE_NAME:
The name of the resource for which the error is being reported.
DESCRIPTION:
A short description of the error.
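The T and C fields described above lend themselves to simple filtering. The sketch below counts permanent hardware errors in a summary report; the sample lines (identifiers included) are purely illustrative, and on a live system the here-document would be replaced by real `errpt` output.

```shell
# Illustrative errpt summary; column 3 is the type (P/T), column 4 the
# class (H/S/O). The identifiers and timestamps below are made up.
errpt_summary='IDENTIFIER TIMESTAMP  T C RESOURCE_NAME DESCRIPTION
35BFC499   0612103095 P H hdisk3        DISK OPERATION ERROR
1581762B   0612102595 T H hdisk3        DISK OPERATION ERROR
9DBCFDEE   0612080095 T O errdemon      ERROR LOGGING TURNED ON'

# Count lines (after the header) that are both permanent and hardware.
perm_hw=$(printf '%s\n' "$errpt_summary" |
    awk 'NR > 1 && $3 == "P" && $4 == "H" { n++ } END { print n + 0 }')
echo "permanent hardware errors: $perm_hw"
```

A non-zero count here would be a prompt to run the detailed report described next.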

The following example shows how the errpt command can be used to produce a more detailed report for each logged error:


# errpt -a | pg

It is worth piping the output through either more or pg if there are many errors logged.

Of all the fields displayed, the most useful in identifying the nature and cause of the error are:

ERROR LABEL:
This is a label used to identify the error. An example of a physical volume error is DISK_ERR4.
Error Class:
This describes if the error is caused by a hardware or software problem denoted by (H) or (S) respectively.
Error Type:
This describes if the error is permanent (PERM) or temporary (TEMP).
Description:
This will be a short description of the error.
Probable Causes:
If included, this field will identify the likely cause of the error which can be a software program or a physical device.
Failure Causes:
This will identify the exact cause of the reported failure.
Recommended Actions
This will describe any recovery or reporting action that will need to be taken.

For more information about how to interpret the output of both the summary and detailed reports, please refer to the errpt command and its associated documentation in InfoExplorer.

Listing Information about Physical Volumes

A physical volume correctly installed on the system can be assigned to a volume group and can subsequently be used to hold file systems and logical volumes. Requirements of logical volumes can vary and sometimes their position within a physical volume can be quite important. So information about free physical partitions and their availability within different sectors on the disk can be very useful. There are several commands which can be used to identify such information pertinent to physical volumes. However, a single command which can provide this is lspv.

Listing Physical Volumes on the System

The lspv command, when executed without any arguments, will produce output identifying each physical volume by the name by which it is known to the system, the unique physical volume identifier assigned to it, and the volume group, if any, to which it belongs. It will appear as:


# lspv
hdisk0 00000310df1bbcef rootvg
hdisk1 00000310df26b596 rootvg
hdisk2 000001856246d451 None

In the above example, hdisk2 has not yet been allocated to a volume group, so the third field appears as None. Physical volumes hdisk0 and hdisk1, on the other hand, are allocated to volume group rootvg.
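When hunting for a spare disk to add to a volume group, the None entries are the interesting ones. As a sketch, the fragment below selects them from the sample listing above; the here-document stands in for live `lspv` output.

```shell
# Sample lspv output; on a live system: lspv_output=$(lspv)
lspv_output='hdisk0 00000310df1bbcef rootvg
hdisk1 00000310df26b596 rootvg
hdisk2 000001856246d451 None'

# Print disks whose third (volume group) field is None.
free_disks=$(printf '%s\n' "$lspv_output" | awk '$3 == "None" { print $1 }')
echo "$free_disks"
```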

Listing Physical Volume Characteristics

The lspv command can also be used to retrieve more detailed information about physical volumes. The command must however be invoked with the name of the disk for which information is required as the argument.

For example:


# lspv hdisk0

If the physical volume being interrogated is currently not allocated to a volume group, no detailed information can be produced about it and an appropriate message will be output to indicate this.

You will note that the information on the left shows detail pertinent to the physical volume itself, whereas that on the right provides detail about the volume group that the physical volume is allocated to. Some of the detail extracted from this output will be discussed in more detail in sections Managing Volume Groups and Managing Logical Volumes, since it is more relevant there.

In brief, the information produced is:

PHYSICAL VOLUME:
The name of the physical volume.
PV IDENTIFIER:
Physical volume identifier unique to the system.
PV STATE:
Availability state of the physical volume for logical I/O. This state can be changed using the chpv command mentioned in section Modifying Physical Volume Characteristics.
STALE PARTITIONS:
The number of physical partitions that are marked stale. Partitions can become stale when the physical volume is temporarily made unavailable while their mirrored copies on other physical volumes change. This will be reviewed in sections Managing Volume Groups and Managing Logical Volumes.
PP SIZE:
The size of the physical partition, set when the volume group is added. This will be reviewed in sections Managing Volume Groups and Managing Logical Volumes.
TOTAL PPs:
The total number of physical partitions that exist on the physical volume.
FREE PPs:
The number of physical partitions available on the physical volume that have not been allocated to file systems or logical volumes.
USED PPs:
The number of physical partitions on the physical volume that have already been allocated to file systems or logical volumes.
FREE DISTRIBUTION:
This lists the number of physical partitions that are available in each of the various regions of the physical volume. The use of this information will be discussed in section Managing Volume Groups and Managing Logical Volumes.
USED DISTRIBUTION:
The same as FREE DISTRIBUTION, except that it lists the number of allocated partitions.
VOLUME GROUP:
The name of the volume group to which the physical volume belongs. This will be reviewed in sections Managing Volume Groups and Managing Logical Volumes
VG IDENTIFIER:
Volume group identifier unique to the system.
VG STATE:
The state of the volume group. This will be reviewed in section Managing Logical Volumes.
ALLOCATABLE:
A yes/no setting to indicate whether or not free physical partitions on this physical volume can be allocated. For more information see the chpv command in section Modifying Physical Volume Characteristics.
LOGICAL VOLUMES:
The number of logical volumes residing on this physical volume.
VG DESCRIPTORS:
The number of Volume Group Descriptor Areas (VGDAs) residing on this physical volume.

Listing Logical Volume Allocation within a Physical Volume

The lspv command can also be used to check how the physical volume is used. It can provide information relating to each logical volume on the physical volume, such as its name, number of logical and physical partitions allocated, distribution across the physical volume, and mount point if one exists.

An example lspv invocation providing this detail is:


# lspv -l hdisk3
hdisk3:
LV NAME LPs PPs DISTRIBUTION MOUNT POINT
lv00 4 4 00..04..00..00..00 /expfs
loglv00 1 1 00..01..00..00..00 N/A

In this example, physical volume hdisk3 has two logical volumes on it, lv00 and loglv00. Logical volume lv00 is allocated 4 logical partitions and 4 physical partitions on this physical volume. All four physical partitions reside in the outer-middle region of the disk. The logical volume is used in a file system whose mount point is /expfs. As logical volume loglv00 is not associated with a file system, its mount point is shown as N/A.
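The PPs column of this report can be totalled to see how much of the disk is in use. The sketch below sums it over the sample `lspv -l hdisk3` lines above; the here-document is an assumption standing in for live output.

```shell
# Data lines from the lspv -l hdisk3 listing above (header omitted).
# Fields: LV NAME, LPs, PPs, DISTRIBUTION, MOUNT POINT.
lspv_l='lv00    4 4 00..04..00..00..00 /expfs
loglv00 1 1 00..01..00..00..00 N/A'

# Sum the PPs column (field 3) across all logical volumes on the disk.
used_pps=$(printf '%s\n' "$lspv_l" | awk '{ sum += $3 } END { print sum + 0 }')
echo "PPs in use on hdisk3: $used_pps"
```

Subtracting this from the TOTAL PPs figure reported by lspv gives the free partitions remaining on the disk.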

Listing Physical Partition Allocation by Physical Volume Region

We have already seen how to retrieve information about the distribution of physical partitions allocated to logical volumes on a particular physical volume. It may sometimes be necessary to check, in more detail, the range of physical partitions allocated to a logical volume and the region of the disk used for those partitions. If performance is an issue, a logical volume can be repositioned at the center of the disk, where it can be accessed more quickly.

The lspv command invocation providing this level of detail is:


# lspv -p hdisk3
hdisk3:
PP RANGE STATE REGION LV ID TYPE MOUNT POINT
1-15 free outer edge
16-20 free outer middle
21-23 used outer middle lv00 jfs /expfs
24-24 used outer middle loglv00 jfslog N/A
25-27 free outer middle
28-28 used outer middle lv00 jfs /expfs
29-30 free outer middle
31-45 free center
46-60 free inner middle
61-75 free inner edge

From the above example, we note that physical partitions allocated to the file system /expfs (logical volume lv00) are 21 through 23, and 28, and are positioned at the outer-middle region of the disk.
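Before repositioning a logical volume toward the center, it is worth totalling the free partitions available there. The sketch below does this arithmetic over the sample `lspv -p hdisk3` ranges above; the here-document is assumed sample data, not live output.

```shell
# Data lines from the lspv -p hdisk3 report above (free/used rows only).
lspv_p='1-15  free outer edge
16-20 free outer middle
21-23 used outer middle
24-24 used outer middle
25-27 free outer middle
28-28 used outer middle
29-30 free outer middle
31-45 free center
46-60 free inner middle
61-75 free inner edge'

# Sum the sizes of all free PP ranges in the center region.
free_center=$(printf '%s\n' "$lspv_p" | awk '$2 == "free" && $3 == "center" {
    split($1, r, "-"); n += r[2] - r[1] + 1 } END { print n + 0 }')
echo "free PPs in the center region: $free_center"
```

For hdisk3, the entire center band (partitions 31 through 45) is free, so a small performance-critical logical volume could be placed there.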

Listing Physical Partition Allocation Table

Although we can determine the range of physical partitions allocated to a logical volume and its distribution, this information does not provide enough detail to determine if a logical volume is allocated a contiguous range of physical partitions. We may require this information if we are considering ways of improving the I/O performance for a logical volume.

The lspv command, with the parameters as shown below, will produce such information.


# lspv -M hdisk3
hdisk3:1-20
hdisk3:21 lv00:3:2
hdisk3:22 lv00:1:2
hdisk3:23 lv00:2:2
hdisk3:24 loglv00:1
hdisk3:25-27
hdisk3:28 lv00:4:2
hdisk3:29-75

The output above consists of the following space-separated fields:
PVname:PP[-PP]
PVname being the physical volume name and PP the physical partition number. The PP number will only be specified as a range when there is more than one free contiguous physical partition on the disk.
LVname:LP[:copy]
LVname being the logical volume name and LP the logical partition number. The copy number is also output if the logical partition is mirrored.

In the above example output, we note that logical volume lv00 has been allocated physical partitions 21, 22, 23, and 28. However, the order in which they are allocated to logical partitions 1, 2, 3, and 4 is 22, 23, 21, and 28 respectively.
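That logical-to-physical ordering can be recovered mechanically from the map. The sketch below parses the used lines from the sample `lspv -M hdisk3` output above (a here-document standing in for live output) and sorts them by logical partition number.

```shell
# Used lines from the lspv -M hdisk3 map above; free ranges omitted.
# Format per line: PVname:PP LVname:LP[:copy]
lspv_m='hdisk3:21 lv00:3:2
hdisk3:22 lv00:1:2
hdisk3:23 lv00:2:2
hdisk3:24 loglv00:1
hdisk3:28 lv00:4:2'

# Split on colons and spaces, keep lv00 entries, and print "LP PP"
# pairs sorted by logical partition number.
order=$(printf '%s\n' "$lspv_m" |
    awk -F'[: ]' '$3 == "lv00" { print $4, $2 }' | sort -n)
printf '%s\n' "$order"
```

The first pair shows logical partition 1 sitting on physical partition 22, confirming the non-contiguous ordering noted above.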

Managing Volume Groups

Like physical volumes, volume groups can be created and removed, and their characteristics can be modified. Additional operations, such as varying on/varying off and importing/exporting, can also be performed. This section will describe the operations pertinent to volume groups.

Adding a Volume Group

Before a new volume group can be added to the system, one or more physical volumes, not used in other volume groups, and in an Available state, must exist on the system. Please see section Managing Physical Volumes for more detail.

It is important to decide upon certain information such as the volume group name and the physical volumes to use prior to adding a volume group. Even though it is possible at a later time to change such detail, it may not always be easy nor convenient, and users may even have to be temporarily denied access to the data if the volume groups need to be varied off.

New volume groups can be added to the system by using the mkvg command or by using SMIT. Of all the characteristics set at creation time, the most important are essentially the volume group name, the maximum number of physical volumes in the group, and the physical partition size.

For example:


# mkvg -y myvg -d 10 -s 8 hdisk1 hdisk5

In this example, a volume group named myvg is created using physical volumes hdisk1 and hdisk5, and the physical partition size for this volume group is set to 8MB. Since the volume group is limited to a maximum of 10 physical volumes, eight more can still be added at a later time. The maximum number of physical volumes allowed in a volume group (10 in the above example) should be given careful consideration, since the space overhead on each physical volume increases with larger numbers.
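The -s choice constrains how large the volume group can grow, so a quick back-of-envelope check is worthwhile before creating it. The arithmetic below is a sketch, assuming the traditional AIX limit of 1016 physical partitions per physical volume; it is plain shell arithmetic, not an AIX command.

```shell
# Sizing arithmetic for the mkvg example above.
pp_size_mb=8        # -s 8: physical partition size in megabytes
max_pvs=10          # -d 10: maximum physical volumes in the group
pps_per_pv=1016     # assumed per-disk physical partition limit

max_mb=$((pp_size_mb * max_pvs * pps_per_pv))
echo "largest disk usable per PV: $((pp_size_mb * pps_per_pv)) MB"
echo "maximum volume group size:  $max_mb MB"
```

With an 8MB partition size, each disk can contribute at most about 8GB, so a larger -s value would be needed for bigger disks.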

Volume groups can also be added through SMIT using the command smit mkvg. The SMIT dialog provides more limited functionality than the command line. The main differences are:

For a new volume group to be successfully added to the system using the mkvg command, the root file system should have between 1 and 2MB of free space. This can be checked using the df command. The free space is required because a file is written to the directory /etc/vg each time a new volume group is added. It is also important to note that the -f flag will allow a physical volume that still has a VGDA on it to be allocated to a new volume group; however, the physical volume must not be part of another volume group that is varied on.

Modifying Volume Group Characteristics

Not many changes can be made to the characteristics of a volume group. The changes that are possible, described in the sections that follow, are whether the volume group is automatically varied on at system restart, whether it remains varied on after losing quorum, and unlocking a volume group that has become locked.

Modifying Volume Group Activation Characteristics

The command to allow a volume group to be varied on automatically each time a system is restarted is:


# chvg -ay VGname

In this example, volume group VGname will be varied on automatically each time the system is restarted.

To turn off automatic varying on of a volume group, the following command needs to be executed:


# chvg -an VGname

It may sometimes also be necessary to allow a volume group to remain varied on even though quorum is lost. In a two-disk volume group, if the physical volume holding the two VGDAs is damaged, the volume group will be varied off, since quorum is lost. In AIX Version 4, it is possible to prevent a volume group from being varied off automatically when quorum is lost, so that access to data on the good physical volumes can continue.

In the example below, the volume group VGname will remain varied on irrespective of loss of quorum.


# chvg -Qn VGname

The following example command will ensure volume group VGname is varied off after quorum is lost.


# chvg -Qy VGname

The chvg command invoked with the -Q flag will only take effect when the system is restarted. It is important that the boot image is updated after executing the chvg -Qy or chvg -Qn command. This can be done using either the bosboot or savebase command. Failure to do so will mean that the change does not take effect when the system is restarted.

Unlocking a Volume Group

In AIX Version 4, it is now also possible to unlock a volume group. A volume group can become locked when an LVM command terminates abnormally. This is quite likely when the system crashes while an LVM operation is being performed on the system or if an LVM command core dumps.

The example command below will unlock volume group, VGname.


# chvg -u VGname

For the above command to succeed, no other LVM command must be operating on the volume group in question.

Adding a Physical Volume

It may sometimes be necessary to increase the free space available in a volume group so that existing file systems and logical volumes within it can be extended, or new ones added. This requires additional physical volumes to be made available within the volume group. It is possible to add physical volumes to a volume group, up to the maximum specified at creation time. Note that when a physical volume is added in this way, any data already on it will be destroyed.

A physical volume can be added using the extendvg command. In the following example, physical volume hdisk3 is being added to volume group myvg.


# extendvg myvg hdisk3

The extendvg command will fail if the physical volume being added already belongs to a varied on volume group on the current system. Also, if the physical volume being added belongs to a volume group that is currently not varied on, the user will be asked to confirm whether or not to continue.

Removing a Physical Volume

It may sometimes be necessary to free up one or more physical volumes from a volume group. Suppose that three physical volumes have been allocated to a volume group and only two are actually used for data storage. In this instance, the unused physical volume could be removed from the volume group so that it can be made available for use in other volume groups. It may also be necessary to remove a physical volume if it becomes damaged so that maintenance work can be carried out on it. Whatever the reason, physical volumes can be removed using the reducevg command.

In the following example, physical volume hdisk3 is removed from the volume group myvg:


# reducevg myvg hdisk3

The reducevg command will only succeed in removing a physical volume if no logical partitions remain allocated on it.

The reducevg command provides the -d and -f flags. The -d flag is useful since it deallocates all logical partitions and deletes all logical volumes from the specified physical volume upon user confirmation. The -f flag used in conjunction with the -d flag will force the deallocation of logical partitions and deletion of logical volumes without user confirmation.

If the logical volumes on the physical volume specified to be removed also span other physical volumes in the volume group, the removal operation may destroy the integrity of those logical volumes, regardless of the physical volume on which they reside.

Importing and Exporting a Volume Group

There may be times when a volume group may need to be moved from one RISC System/6000 system to another, so that logical volume and file system data in the volume group can be accessed directly on the target system. It may even be necessary to remove all knowledge of a volume group from the system if the file systems and logical volumes within it are no longer being accessed. While such redundant volume groups remain on the system, the physical volumes within them remain tied up unnecessarily, when they could be used within other volume groups.

However, before the physical volumes in a volume group are actually disconnected, it would be good practice to remove the system definition of the volume group to which they are allocated. To remove all knowledge of a volume group from the ODM database, the volume group needs to be exported using the exportvg command. This command will not remove any user data in the volume group but only remove its definition from the ODM database. Similarly, when a volume group is moved, the target system needs to be made aware of the new volume group. This can be achieved by importing the volume group using the importvg command which will add an entry to the ODM database.

In the example below, volume group myvg will be exported:


# exportvg myvg

Once exported, a volume group can no longer be accessed.

There are some restrictions when using the exportvg command to export a volume group: the volume group must be varied off before it can be exported, and a volume group containing an active paging space cannot be exported until the paging space has been deactivated.

In the following example use of importvg, volume group myvg is being imported onto the target system using hdisk3. The information about the volume group characteristics, such as the other physical volumes in the group and the logical volumes and file systems, will be read from the VGDA held on physical volume hdisk3.


# importvg -y myvg hdisk3

In this example, the name to be given to the imported volume group is specified using the -y flag. However, if the specified volume group name is already in use, the importvg command will fail with an appropriate error message, since duplicate volume group names are not allowed. In this instance, the command can be rerun with a unique volume group name, or without the -y flag and volume group name altogether, in which case the imported volume group is given a unique system default name. It is also possible that some logical volume names may conflict with those already on the system; the importvg command will automatically reassign these with system default names.

In AIX Version 4, when a volume group is imported it is automatically varied on, whereas, in AIX Version 3, the volume group has to be varied on separately.

The important thing to remember when moving volume groups from system to system is that the exportvg command should always be run on the source system before the volume group is imported on the target system. Consider a volume group that is imported on system Y without an exportvg actually being performed on system X. If system Y makes a change to the volume group, such as removing a physical volume from it, and the volume group is then imported back onto system X, the ODM database on system X will not be consistent with the changed information for this volume group.

It is however, worth noting that a volume group can be moved to another system without it first being exported on the source system.

Varying On and Varying Off Volume Groups

Before administrative activities such as opening of logical volumes and mounting of file systems can be performed, the relevant volume groups need to be made available. This can be achieved by varying on a volume group using the varyonvg command. Likewise, when access to a volume group needs to be stopped entirely, it can be varied off after unmounting all file systems and closing all open logical volumes within it. The varyoffvg command can be used to vary off a volume group.

During the varying on process, a number of different operations are performed in order to make a volume group available: the VGDAs and VGSAs on the member physical volumes are read, their contents are compared to ensure that a quorum of consistent copies exists, the ODM is resynchronized with the VGDA information where necessary, and the volume group is then made available for logical I/O.

The following example shows how varyonvg can be used to varyon a volume group.


# varyonvg myvg

This command will vary on volume group myvg using the varying on process described above. If quorum were lost, volume group myvg would not be varied on. A number of optional flags can be specified with the varyonvg command to override the default processing.

The optional flags are:

-f
Forces the volume group to be varied on even though a majority of VGDAs does not exist. Forcing a volume group to vary on could be quite dangerous, particularly if a damaged physical volume is holding logical partitions of a logical volume which is being updated. This would cause corruption of the data.
-n
Disables the synchronization of the stale physical partitions within the volume group. This allows flexibility to the systems administrator in providing control over how the volume group can be recovered.
-p
This permits a volume group to be varied on only when all the physical volumes in the volume group are available.
-s
Allows a volume group to be varied on in system maintenance mode. In this mode no logical volumes can be opened, thereby disallowing all logical I/O to logical volume and file system data. Since logical volume commands can still be run on the volume group, it provides a mechanism for looking at and resolving any problems that may occur on it.

To vary off a volume group, the following command can be issued:


# varyoffvg myvg

Before a volume group can actually be varied off, all open logical volumes must be closed and all mounted file systems must be unmounted. If a volume group exhibits problems and needs to be repaired, it can be varied off directly into maintenance mode by using the -s flag.

Monitoring Volume Groups

Volume groups rely on the underlying physical volumes to be operational the whole time that they are activated. If, however, physical volumes become damaged, they can affect the state of volume groups. Therefore, like physical volumes, it is important that volume groups are also monitored regularly so that extensive damage can be avoided. This section will review those AIX logical volume commands which will help in monitoring volume groups and their characteristics.

Listing Volume Groups on the System

Although there are several AIX commands available for finding out about the volume groups on a system, the preferred one is lsvg. This command interrogates the ODM database for all volume groups currently known to the system.


**** Note **** A volume group which has been exported using the exportvg command will not appear in the output.

An example use of the lsvg command and its output is:


# lsvg
rootvg
myvg

Since the above command lists all known volume groups it may sometimes be desired to list only those volume groups which are currently varied on.

Using the -o flag with lsvg will provide this detail.

For example:


# lsvg -o
rootvg

Listing the Characteristics of a Volume Group

A volume group has many characteristics which can be observed, such as its physical partition size, the number of physical volumes it consists of, how much free and used space there is, and more. It may be essential to observe how much free space there is within a volume group to help decide whether or not a logical volume or file system can be extended by a particular amount, or even if a new logical volume or file system can be created with the required size. Apart from the free space, it may also be helpful to find out if the varied-on volume group shows any problems with regard to the physical volumes and physical partitions. If a volume group needed to be varied on forcibly, this could be attributed to a physical volume not having valid VGDA information on it. There could also be stale physical partitions within a volume group, particularly if mirrored copies of logical volumes on damaged physical volumes are not updated. It is possible to see such information about any varied-on volume group at a glance by issuing the lsvg command as follows:


# lsvg myvg
VOLUME GROUP: myvg VG IDENTIFIER: 00000446f5eac0e3
VG STATE: active PP SIZE: 4 megabyte(s)
VG PERMISSION: read/write TOTAL PPs: 574 (2296 megabytes)
MAX LVs: 256 FREE PPs: 571 (2284 megabytes)
LVs: 2 USED PPs: 3 (12 megabytes)
OPEN LVs: 0 QUORUM: 2
TOTAL PVs: 2 VG DESCRIPTORS: 2
STALE PVs: 1 STALE PPs: 1
ACTIVE PVs: 1 AUTO ON: yes

In this example, volume group myvg is being described.

The meaning of the fields in the above output and their values as found in this example will be explained.

VOLUME GROUP:
This is the name of the volume group. In this example it is myvg.
VG STATE:
This describes whether the volume group is varied on or varied off. In our example, the content of this field is active, indicating that the volume group is varied on. This field can have any one of the following values:
  • active/complete - varied on and all physical volumes active
  • active/partial - varied on but one or more physical volumes are inactive
  • inactive - varied off
VG PERMISSION:
This describes whether the volume group is accessible with read-only permission or with both read and write permission. The volume group in this example has read and write access permission.
MAX LVs:
This field represents the maximum number of logical volumes that can be created within the volume group, which in this example is 256.
LVs:
This represents the number of logical volumes that have so far been created within the volume group. In our example, only two logical volumes exist within the volume group myvg.
OPEN LVs:
This describes the number of logical volumes that are currently open for logical I/O. In the above example, there are no logical volumes currently open.
TOTAL PVs:
The total number of physical volumes that exist within the volume group. In volume group myvg there are two physical volumes.
STALE PVs:
The number of inactive physical volumes in the volume group. The example appears to have one physical volume which is inactive. This indicates there is a problem with one of the physical volumes in the volume group.
ACTIVE PVs:
The number of active physical volumes within the volume group. Volume group myvg has one active (working) physical volume.
VG IDENTIFIER:
This field shows the system wide unique alphanumeric identifier for the volume group. In the example, this value is 00000446f5eac0e3.
PP SIZE:
A numeric value representing the size, in megabytes, of each physical partition within the volume group. This value is specified when the volume group is created. The example volume group uses the default physical partition size of 4MB.
TOTAL PPs:
This field shows the total number of physical partitions which exist in the volume group. It also shows, in brackets, the size of the volume group which is calculated using the physical partition size. Volume group myvg has 574 physical partitions allocated.
FREE PPs:
This field shows the amount of unallocated space in the volume group in terms of physical partitions. The size, in megabytes, is also shown in brackets. The number of free physical partitions in the above example is 571.
USED PPs:
This field shows the number of used physical partitions. The format of the contents of this field is the same as for the two previous fields. In the above example, only 3 physical partitions (12MB) have been used.
QUORUM:
This field shows the number of Volume Group Descriptor Areas (VGDAs) that must remain accessible for the volume group to stay varied on, that is, a majority. For a two-disk volume group, quorum is counted in VGDAs rather than physical volumes, since one of the two disks holds two of the three VGDAs.
VG DESCRIPTORS:
This value represents the number of VGDAs currently available in the varied-on volume group. In the example volume group myvg, there appear to be 2 VGDAs available. Since the volume group consists of two physical volumes, there should really be 3 VGDAs available; from the above output it can clearly be seen that there is a problem accessing the third VGDA.
STALE PPs:
This field represents the number of stale physical partitions. The example shows 1 stale physical partition. This is likely since one of the physical volumes is currently inactive.
AUTO ON:
This field indicates whether the volume group will be varied on automatically at system restart. This characteristic can be changed using the chvg command. Volume group myvg, in the above example, will be varied on automatically each time the system is rebooted.
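The size figures shown by lsvg are simple arithmetic over the partition counts. As an illustration (not an AIX command, just a portable shell sketch using the figures copied from the example output above):

```shell
#!/bin/sh
# Figures copied from the example lsvg output above
PP_SIZE=4        # PP SIZE, in megabytes
TOTAL_PPS=574    # TOTAL PPs
FREE_PPS=571     # FREE PPs
USED_PPS=3       # USED PPs

# FREE PPs plus USED PPs should account for TOTAL PPs
echo "accounted for: $((FREE_PPS + USED_PPS)) of $TOTAL_PPS PPs"

# The megabyte values in brackets are simply PPs multiplied by PP SIZE
echo "total: $((TOTAL_PPS * PP_SIZE)) MB"
echo "free:  $((FREE_PPS * PP_SIZE)) MB"
echo "used:  $((USED_PPS * PP_SIZE)) MB"
```

This reproduces the 2296, 2284 and 12 megabyte figures shown in brackets in the example output.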

Listing the Logical Volumes in a Volume Group

The lsvg command can be used to list all the logical volumes in a varied on volume group. To do so the -l flag needs to be specified together with the volume group name.

For example:


# lsvg -l myvg
myvg:
LV NAME TYPE LPs PPs PVs LV STATE MOUNT POINT
mylv jfs 6 12 2 open/syncd /myjfs

The above command provides details of each logical volume on a separate line. The information includes the following:

LV NAME:
This is the name of the logical volume.
TYPE:
This field describes the type of the logical volume. This can be one of the standard types, such as jfs, jfslog, or paging; if a user-defined logical volume type has been specified, this field will reflect that.
LPs:
This will be the number of logical partitions allocated to the logical volume.
PPs:
This will be the number of physical partitions allocated to the logical volume. If a logical volume has mirrored copies, this number will be the LPs value multiplied by the number of mirrored copies.
PVs:
This represents the number of physical volumes across which the physical partitions are spread.
LV STATE:
The state of the logical volume. This can be opened/stale (open, but some physical partitions are not current), opened/syncd (open and synchronized), or closed.
MOUNT POINT:
This is the mount point of a file system if one exists. If a file system has not been added to a logical volume the entry will appear as N/A.
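The relationship between the LPs and PPs columns can be checked with a quick sketch; the figures below are taken from the example lsvg -l output above, assuming mylv has two mirrored copies:

```shell
#!/bin/sh
# mylv from the example output: 6 LPs, 2 copies (assumed), so 12 PPs
LPS=6
COPIES=2
echo "mylv: $LPS LPs x $COPIES copies = $((LPS * COPIES)) PPs"
```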

Listing Physical Volume Status within a Volume Group

So far we have seen how the lsvg command can be used to list the volume groups, their characteristics in detail and also information about the logical volumes which have been created on them. The lsvg command can also be used to extract information about the physical volumes that exist within a volume group. To view this information the -p flag needs to be used.

For example:


# lsvg -p myvg
myvg:
PV_NAME PV STATE TOTAL PPs FREE PPs FREE DISTRIBUTION
hdisk0 active 287 267 58..37..57..57..58
hdisk2 active 287 280 58..50..57..57..58
hdisk3 active 287 280 58..50..57..57..58

For each physical volume identified for the volume group, the following information is provided:

PV_NAME:
The name of the physical volume.
PV STATE:
Indicates whether or not the physical volume is active.
TOTAL PPs:
The number of physical partitions that exist on the physical volume in question.
FREE PPs:
The number of physical partitions on the physical volume that have so far not been allocated to a logical volume or file system.
FREE DISTRIBUTION:
The distribution of unallocated physical partitions on the physical volume over specific regions of the disk, the regions being: outer edge, outer middle, center, inner middle, and inner edge.

It is useful to view the distribution of free (unallocated) physical partitions according to regions of the disk. This is particularly beneficial when deciding on the placement of logical volumes or file systems for fast access. It can provide useful information about the placement of existing logical volumes and file systems, and can help in determining whether a reorganization of the logical volumes is required, so that free contiguous physical partitions can be made available for other allocation requests.
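As a consistency check, the five FREE DISTRIBUTION fields should sum to the FREE PPs column. A portable awk sketch, fed the hdisk0 line from the example output above:

```shell
#!/bin/sh
# Sum the five region counts in the FREE DISTRIBUTION column ($5)
# and show them next to the FREE PPs column ($4)
echo "hdisk0 active 287 267 58..37..57..57..58" |
awk '{
    n = split($5, region, "\\.\\.")   # five fields, outer edge to inner edge
    sum = 0
    for (i = 1; i <= n; i++) sum += region[i]
    printf "free PPs: %d, distribution sum: %d\n", $4, sum
}'
```

For hdisk0, 58+37+57+57+58 does indeed equal the 267 free physical partitions reported.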

Managing Logical Volumes

Physical volumes and volume groups are normally not addressed directly by users and applications to access data, and they cannot be manipulated to provide disk space for use by users and applications. However, logical volumes provide the mechanism to make disk space available for use, giving users and applications the ability to access data stored on them.

Logical volumes need to be managed on a day-to-day basis, and this section will highlight those management issues relating to logical volumes, and why they should be given important consideration. The areas to be covered will include adding, removing, extending, copying, migrating and reorganizing, and listing logical volumes.

Adding a Logical Volume

In order to provide users the ability to store and retrieve data on the disk, logical volumes need to be added to a volume group on the system. However, before a logical volume is actually created, certain characteristics about it, such as its size, physical partition placement policy, and the volume group in which it should belong need to be specified. These characteristics can be better determined by understanding the needs of the users and applications that will utilize the logical volume.

The command which will add a logical volume to a volume group is mklv. An example of this command is:


# mklv -y mylv -c 2 myvg 10

The above example command will create a logical volume mylv in the volume group myvg. The logical volume will be allocated 10 logical partitions, and since two copies are requested with the -c flag, each logical partition will map to two physical partitions.

The two vital pieces of information that are mandatory when creating a logical volume are:

  1. The number of logical partitions
  2. The name of the volume group to which it will belong

Many different characteristics for the logical volume can be set at creation time using the mklv command. In AIX Version 4, since it is possible to create striped logical volumes, the mklv command has been updated accordingly. For more information about the use of mklv and its flags, please refer to the InfoExplorer hypertext documentation.
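Before running mklv it can be worth checking that the volume group has enough free physical partitions; a mirrored request needs logical partitions multiplied by copies. A sketch of the arithmetic for the example above, using the FREE PPs figure reported by lsvg:

```shell
#!/bin/sh
# Sizing check for the mklv example: 10 LPs with 2 copies each
LPS=10
COPIES=2
FREE_PPS=571     # FREE PPs, as reported by lsvg for the volume group

NEEDED=$((LPS * COPIES))
if [ "$NEEDED" -le "$FREE_PPS" ]; then
    echo "ok: $NEEDED PPs needed, $FREE_PPS free"
else
    echo "insufficient space: $NEEDED PPs needed, only $FREE_PPS free"
fi
```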

Removing a Logical Volume

Under different circumstances, logical volumes may need to be removed from a volume group. Consider that a logical volume is no longer used for storage purposes by users and applications. The data within the logical volume could be backed up and the space occupied by the logical volume could be freed by removing the logical volume from the volume group. There may even be times when a logical volume may need to be removed because it has more than the required number of logical partitions allocated. In this instance, the excess logical partition allocation could be freed by backing up the data in the logical volume, removing the logical volume, recreating it with the reduced number of logical partitions, and then restoring the data into it.

The resulting free space could be put to better use by allocating it to other logical volumes requiring it. Whatever the reason, a logical volume can be removed by using the rmlv command. An example use of this command is:


# rmlv mylv

This command will remove the logical volume mylv from the system. The command will appropriately remove all knowledge of the logical volume from the ODM database and from the VGDAs on the physical volumes in the volume group.

It is also possible to remove all logical partitions on a particular physical volume by using the rmlv command. For example:


# rmlv -p hdisk4 mylv

This command will remove copies of all logical partitions for the logical volume mylv residing on the physical volume hdisk4.


**** Warning **** If the logical partitions being removed are the only ones remaining for the logical volume, this command will also remove the logical volume from the system.

By default, the rmlv command requests user confirmation before performing its task; the -f flag is provided to override this.

Increasing the Size of a Logical Volume

Over time, the disk space needs of users and applications will typically grow, and for this reason the size of logical volumes will also need to be increased. This can be achieved by using the extendlv command. However, there must be sufficient free (unallocated) physical partitions available within the volume group to satisfy the operation.

For example:


# extendlv mylv 10

This will extend the logical volume mylv by 10 logical partitions using the available free space in the volume group.

Certain rules need to be adhered to when using the extendlv command to extend striped logical volumes. For more information about the use of the extendlv command and its flags, please refer to the InfoExplorer hypertext documentation.

Copying a Logical Volume

Logical volumes may need to be copied for a number of reasons. If a disk is to be removed and replaced by a faster one, the logical volumes on that disk will need to be copied to the new disk. Logical volumes can be copied to new logical volumes or to existing logical volumes which are then over-written.

In order to copy a logical volume, use the cplv command, as in the following example:


# cplv -v myvg -y newlv oldlv

This copies the contents of oldlv to a new logical volume called newlv in the volume group myvg. If the volume group is not specified, the new logical volume will be created in the same volume group as the old logical volume. This command creates a new logical volume. The following example demonstrates how to copy a logical volume to an existing logical volume.


# cplv -e existinglv oldlv

This copies the contents of oldlv to the logical volume existinglv. Confirmation for the copy will be requested as all data in existinglv will be over-written.


**** Warning ****

If existinglv is smaller than oldlv, then data will be lost, probably resulting in corruption.


Copying a logical volume can also be done through smit using the smit cplv fastpath.
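A simple precaution against the data loss described in the warning is to compare the sizes of the two logical volumes, in logical partitions, before copying. A sketch with hypothetical sizes (on a real system these figures would come from lslv output for each logical volume):

```shell
#!/bin/sh
# Hypothetical sizes, in logical partitions
SRC_LPS=109      # size of the source logical volume
DEST_LPS=100     # size of the existing destination logical volume

if [ "$DEST_LPS" -lt "$SRC_LPS" ]; then
    echo "warning: destination is smaller than source; data would be lost"
else
    echo "destination is large enough for a full copy"
fi
```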

Migrating and Reorganizing Logical Volumes

As the uses of existing logical volumes change, sooner or later there will be a requirement to modify the placement of some logical volumes to alter their performance characteristics. Two commands can assist in this process: migratepv, which moves physical partitions from one physical volume to another, and reorgvg, which reorganizes the partition allocation within a volume group to better satisfy the allocation policies of its logical volumes.

Listing a Logical Volume

All of the attributes defined for a logical volume can be listed using the lslv command as follows:


# lslv mylv
LOGICAL VOLUME: mylv VOLUME GROUP: myvg
LV IDENTIFIER: 00013948b0189961.7 PERMISSION: read/write
VG STATE: inactive LV STATE: opened/syncd
TYPE: jfs WRITE VERIFY: off
MAX LPs: 500 PP SIZE: 4 megabyte(s)
COPIES: 1 SCHED POLICY: parallel
LPs: 109 PPs: 109
STALE PPs: 0 BB POLICY: relocatable
INTER-POLICY: minimum RELOCATABLE: yes
INTRA-POLICY: center UPPER BOUND: 32
MOUNT POINT: /myfs LABEL: /myfs
MIRROR WRITE CONSISTENCY: on
EACH LP COPY ON A SEPARATE PV ?: yes

The fields displayed have the following meanings:

LOGICAL VOLUME:
The name of the logical volume.
VOLUME GROUP:
The name of the volume group that the logical volume is in.
LV IDENTIFIER:
The system unique identifier for the logical volume.
PERMISSION:
The access permission, which can be read-only, or read-write.
VG STATE:
The current state of the volume group. This can be one of:
  1. active/complete - all physical volumes are active.
  2. active/partial - not all physical volumes are active.
  3. inactive - the volume group is not active.
LV STATE:
The current state of the logical volume. This can be one of:
  1. opened/stale - logical volume is open, but some physical partitions do not contain current information.
  2. opened/syncd - logical volume is open and synchronized.
  3. closed - logical volume has not been opened.
TYPE:
The type of the logical volume (JFS for example).
WRITE VERIFY:
Whether write verify is being used or not.
MAX LPs:
The maximum number of logical partitions that the logical volume can contain.
PP SIZE:
The size of the physical partitions in the logical volume.
COPIES:
The number of copies of the logical volume that exist.
SCHED POLICY:
Whether writes are to be scheduled serially, or in parallel to disk.
LPs:
The current number of logical partitions in the logical volume.
PPs:
The current number of physical partitions in the logical volume.
STALE PPs:
The number of physical partitions in the logical volume that do not contain current information.
BB POLICY:
Whether bad block allocation is to be used or not for this logical volume.
INTER-POLICY:
Whether the maximum or minimum range of disks should be used for logical partition allocation for this logical volume.
RELOCATABLE:
Whether partitions can be relocated if a reorganization occurs.
INTRA-POLICY:
Specifies the preferred location for physical partitions on the disk. This can be edge, middle, or center.
UPPER BOUND:
This indicates the maximum number of physical volumes within the volume group that can be used for physical partition allocation.
MOUNT POINT:
If this logical volume contains a file system, then this indicates the mount point for that file system.
MIRROR WRITE CONSISTENCY:
Whether writes are cached to help ensure consistency between mirrored copies.
EACH LP COPY ON A SEPARATE PV ?:
Whether the allocation policy is strict meaning that logical volume copies will be placed on separate physical volumes if possible.

Listing a Summary of a Logical Volume Allocation

If a summary of the physical partition usage for a logical volume is required, rather than a complete listing of all attributes, the following command can be used:


# lslv -l mylv
mylv:/myfs
PV COPIES IN BAND DISTRIBUTION
hdisk0 107:000:000 27% 019:004:029:032:023
hdisk1 002:000:000 100% 000:000:002:000:000

The fields shown in the output above have the following meaning:

PV
The physical volume name.
COPIES
This field has three colon-separated sub-fields, showing the number of logical partitions with one, two, and three physical partitions, respectively, allocated on this physical volume.
IN BAND
This shows the percentage of the physical partitions on the physical volume that are allocated within the region specified by the intra-physical volume allocation policy.
DISTRIBUTION
This shows, for each of the five disk regions (outer edge, outer middle, center, inner middle, and inner edge), the number of physical partitions allocated to the logical volume.
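With the example figures above, the IN BAND value for hdisk0 can be reproduced by hand: the intra-policy is center, the center region (the third DISTRIBUTION field) holds 29 of the 107 partitions on the disk, and 29/107 rounds down to 27%. As a sketch:

```shell
#!/bin/sh
# hdisk0 from the example: 107 PPs in total, 29 in the center region
TOTAL=107
IN_POLICY=29
echo "IN BAND: $((IN_POLICY * 100 / TOTAL))%"
```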

Reading the VGDA on a Physical Volume

If it is required to interrogate the VGDA on the physical disk in order to find the status of a logical volume, the following command can be used:


# lslv -n mypvid mylv

This will retrieve status information similar to that produced by the lslv mylv, but from the VGDA, rather than the ODM.

Managing the Storage Environment

Managing storage is about optimizing the environment for the requirements of the processes that will be using the subsystems within it. This will involve considerations of performance, availability, and disk space utilization.

Management of the environment also involves ensuring recovery is possible in the event of user errors or hardware and software failures. The key to this is the development and implementation of a good backup strategy, and this will also be discussed. Mechanisms for backing up the system and their capabilities have changed somewhat from AIX Version 3 to AIX Version 4, and both environments will be examined.


**** Note ****

Please read this section in conjunction with Storage Subsystem Design, as there are many other considerations involved in maximizing performance, availability and disk utilization.


Disk Space and Performance/Availability Management

This section will look at the management issues inherent in controlling performance, availability, and disk space utilization.

Managing Performance

In order to maximize the performance of a disk subsystem, certain options can be taken at logical volume and file system creation time. In addition, existing logical volumes can be modified to increase performance, and volume groups can be reorganized to improve performance.

Creating Logical Volumes and File Systems for Performance

In order to maximize performance, create logical volumes as follows; the smit menus for the creation will be shown in this section. For changing logical volume characteristics, the commands will be shown. Either approach is valid, and for a detailed discussion of the commands involved, see Storage Management Files and Commands Summary, or the InfoExplorer documentation.


# smit mklv

This starts smit in the process for adding a new logical volume.


                              Add a Logical Volume

Type or select values in entry fields.
Press Enter AFTER making all desired changes.

[TOP] [Entry Fields]
Logical volume NAME [perflv]
* VOLUME GROUP name datavg
Number of LOGICAL PARTITIONS [25] #
PHYSICAL VOLUME names [hdisk8 hdisk1]+
Logical volume TYPE [jfs]
POSITION on physical volume center +
RANGE of physical volumes maximum +
MAXIMUM NUMBER of PHYSICAL VOLUMES [2] #
to use for allocation
Number of COPIES of each logical 1 +
partition
Mirror Write Consistency? no +
Allocate each logical partition copy no +
on a SEPARATE physical volume?
RELOCATE the logical volume during reorganization? yes +
Logical volume LABEL []
MAXIMUM NUMBER of LOGICAL PARTITIONS [128]
Enable BAD BLOCK relocation? yes +
SCHEDULING POLICY for writing logical parallel +
partition copies
Enable WRITE VERIFY? no +
File containing ALLOCATION MAP []
Stripe Size? [Not Striped] +
[BOTTOM]

F1=Help F2=Refresh F3=Cancel F4=List
F5=Reset F6=Command F7=Edit F8=Image
F9=Shell F10=Exit Enter=Do

This creates a logical volume with the following characteristics:

PHYSICAL VOLUME names:
This field contains the names of the physical volumes that are to be used for the physical partitions of the logical volume being created.
POSITION on physical volume:
This field specifies the desired location of the physical partitions on the disk. Center is chosen for optimum performance. The LVM will attempt to locate free center partitions on the disks specified previously; if not available, middle partitions, then edge partitions will be selected.
RANGE of physical volumes:
This parameter governs the way in which the LVM will allocate partitions on the physical volumes specified above. Maximum constrains the LVM to allocating partitions across as many of the physical volumes as possible.
Number of COPIES of each logical partition:
This field controls the level of mirroring. Set to 1 implies no mirroring.
Mirror Write Consistency:
Only valid if mirroring.
Allocate each logical partition copy on a SEPARATE physical volume?
Only valid if mirroring.
RELOCATE the logical volume during reorganization?
This allows the physical partitions to be moved during reorganization if required. This can be useful if performance requirements change.
SCHEDULING POLICY for writing logical partition copies:
Only valid for mirroring
Enable WRITE VERIFY:
This parameter should be set to no to prevent the extra disk rotation required for verification.
File containing ALLOCATION MAP:
It is possible to override the LVM's allocation policies and provide a file containing the physical partition locations required. An example of this can be found in Map Files Usage and Contents.
Stripe Size?
Striping is discussed later in this section.

Having created the logical volume, a file system must next be created within it:


# smit crfs

Select the option to Add a Journaled File System on a Previously Defined Logical Volume.


       Add a Journaled File System on a Previously Defined Logical Volume

Type or select values in entry fields.
Press Enter AFTER making all desired changes.

[Entry Field]
* LOGICAL VOLUME name perflv +
* MOUNT POINT [/tmp/nick]
Mount AUTOMATICALLY at system restart? yes +
PERMISSIONS read/write +
Mount OPTIONS [] +
Start Disk Accounting? no +
Fragment Size (bytes) 4096 +
Number of bytes per inode 4096 +
Compression algorithm no +






F1=Help F2=Refresh F3=Cancel F4=List
F5=Reset F6=Command F7=Edit F8=Image
F9=Shell F10=Exit Enter=Do

This will create a file system with the following characteristics:

Fragment Size (bytes):
This parameter controls the size of the basic unit of allocation at the file system level. Setting this to 4096 creates fragments of the largest size possible, thereby minimizing the overhead involved and maximizing performance.
Number of bytes per inode:
This parameter governs the number of i-nodes actually created per number of bytes in the file system. This will have no specific effect on performance.
Compression algorithm:
To maximize performance, do not use compression.
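The fragment-size trade-off can be illustrated with a little arithmetic: a file always occupies a whole number of fragments, so with 4096-byte fragments even a very small file consumes 4096 bytes of disk. A hypothetical sketch:

```shell
#!/bin/sh
FRAG=4096        # fragment size, in bytes
FILE_SIZE=100    # a hypothetical small file

# Fragments needed, rounding up to a whole fragment
FRAGS=$(( (FILE_SIZE + FRAG - 1) / FRAG ))
echo "a $FILE_SIZE-byte file occupies $((FRAGS * FRAG)) bytes on disk"
```

Smaller fragment sizes reduce this waste when many small files are stored, at the cost of the extra allocation overhead noted above.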

Modifying Logical Volumes for Performance

In order to maximize the performance of an existing logical volume do the following:


# chlv -a c LVname

This command will change the intra-disk physical allocation policy to use the center of the disk if possible. In order for existing partitions to take advantage of the new policy, the volume group will need reorganizing. This is discussed in the next section. Partitions added if the logical volume is extended will be allocated using the new policy. LVname should be the name of the logical volume that is to be changed.


# chlv -e x LVname

This command will change the inter-disk physical allocation policy to use the maximum number of disks possible within the volume group, for allocating further physical partitions. Again, reorganization will be required if the existing partitions that comprise the logical volume are to take advantage of this. LVname should be the name of the logical volume that is to be changed.

Reorganizing Volume Groups for Performance

In order to reorganize a volume group after policies have been changed for logical volumes within that group, the following command should be executed:


# reorgvg VGname LVname_1 LVname_2 LVname_3 ...

This command will instruct the LVM to attempt to reshuffle the physical partition allocations within the volume group VGname, in order to satisfy as far as possible the policy requirements of the logical volumes specified in the list (LVname_1, LVname_2, and LVname_3 in this example). The LVM will try to implement the policies for logical volumes in the order specified. In this example, LVname_1's allocation will take precedence over LVname_2's.

Determining which logical volumes are in a volume group can be achieved using the lsvg command as follows:


# lsvg -l VGname
VGname:
LV NAME TYPE LPs PPs PVs LV STATE MOUNT POINT
datalog jfslog 1 1 1 open/syncd N/A
datapg paging 5 10 2 closed/syncd N/A
perflv jfs 25 25 2 closed/syncd /tmp/nick
datalv4 jfs 10 10 1 closed/syncd /datajfs
#

Using Striping

Further performance enhancement is possible by setting up a logical volume to use striping. An example of this procedure can be found in How to Create a Striped Logical Volume.

Managing Availability

In order to maximize availability, there are certain options that can be selected at logical volume creation time. Existing logical volumes can also be modified to increase availability, and both possibilities will be examined.

Creating Logical Volumes for Availability

In order to maximize availability, create logical volumes as follows. Logical volume creation will be shown using smit; the commands are detailed in Storage Management Files and Commands Summary and documented in InfoExplorer. There are no particular availability-related options during file system creation, though as has been mentioned previously, a journaled file system itself provides enhanced availability through journaling.


# smit mklv

This starts smit in the process for adding a new logical volume.


                              Add a Logical Volume

Type or select values in entry fields.
Press Enter AFTER making all desired changes.

[TOP] [Entry Fields]
Logical volume NAME [availlv]
* VOLUME GROUP name datavg
Number of LOGICAL PARTITIONS [25] #
PHYSICAL VOLUME names [hdisk8 hdisk1 h]+
Logical volume TYPE [jfs]
POSITION on physical volume center +
RANGE of physical volumes minimum +
MAXIMUM NUMBER of PHYSICAL VOLUMES [1] #
to use for allocation
Number of COPIES of each logical 3 +
partition
Mirror Write Consistency? yes +
Allocate each logical partition copy yes +
on a SEPARATE physical volume?
RELOCATE the logical volume during reorganization? yes +
Logical volume LABEL []
MAXIMUM NUMBER of LOGICAL PARTITIONS [128]
Enable BAD BLOCK relocation? yes +
SCHEDULING POLICY for writing logical sequential +
partition copies
Enable WRITE VERIFY? yes +
File containing ALLOCATION MAP []
Stripe Size? [Not Striped] +
[BOTTOM]

F1=Help F2=Refresh F3=Cancel F4=List
F5=Reset F6=Command F7=Edit F8=Image
F9=Shell F10=Exit Enter=Do

This creates a logical volume with the following characteristics:

PHYSICAL VOLUME names:
This parameter specifies the physical volumes within the volume group that should be used to hold the physical partitions that will be created. In this case, there are 3 physical volumes specified, as there will be 3 copies of the logical volume, and for maximum availability, each copy should be on a separate disk.
POSITION on physical volume:
There are no particular availability advantages to be gained from whereabouts on the disk physical partitions are located. In this case, center has been chosen to improve performance.
RANGE of physical volumes:
This parameter governs how many physical volumes the LVM will attempt to use when creating physical partitions for each copy. Setting the value to minimum instructs the LVM to use as few physical volumes as is possible.
Number of COPIES of each logical partition:
This parameter specifies the degree of mirroring that will be implemented. Setting the value to 3 provides maximum availability with two redundant copies of the data existing.
Mirror Write Consistency?
This parameter controls whether the LVM will cache logical partitions until all copies of the partition have been updated. Setting this to yes enhances availability by ensuring consistency between mirrored copies.
Allocate each logical partition copy on a SEPARATE physical volume?
This parameter specifies whether each copy must be placed on a separate physical volume. For maximum availability, it should be set to yes so that no two copies share a disk.
RELOCATE the logical volume during reorganization?
This parameter governs whether the LVM will be allowed to move physical partitions belonging to this logical volume during a reorganization. Set this to yes if there may be a requirement to modify the policies controlling this logical volume.
SCHEDULING POLICY for writing logical partition copies:
This parameter controls how copies will be written to disk. For maximum availability, this should be set to sequential, which ensures each write to a copy must complete before the next occurs, thereby maximizing the probability of a successful copy being made.
Enable WRITE VERIFY?
This parameter toggles the write verification feature. For maximum availability, it should be set to yes.
File containing ALLOCATION MAP:
As described in the previous example, this parameter allows the location of physical partitions on the physical volumes to be directly controlled by the user. As physical location of partitions on each disk is not an availability issue, this feature is not required.
Stripe Size?
This should be set to Not Striped, as striping cannot be used with mirroring.

Modifying Logical Volumes for Availability

In order to maximize availability for an existing logical volume, do the following. It will be necessary to check that enough physical partitions exist on the selected physical volumes to support the new number of mirror copies required. This can be achieved using the lsvg command as shown here:


# lsvg -p VGname
VGname:
PV_NAME PV STATE TOTAL PPs FREE PPs FREE DISTRIBUTION
hdisk8 active 75 18 15..00..00..00..03
hdisk1 active 287 206 58..09..24..57..58
#

This command shows the physical volumes in the volume group, and the free partitions available, both in total, and in actual location.
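Before adding mirror copies it helps to quantify the extra space: each additional copy needs as many new physical partitions as the logical volume has logical partitions, and with strict allocation those partitions must fit on a physical volume not already used by another copy. A sketch with hypothetical figures:

```shell
#!/bin/sh
# Hypothetical figures: raise a 25-LP logical volume from 1 to 3 copies
LPS=25
CUR_COPIES=1
NEW_COPIES=3

EXTRA=$(( (NEW_COPIES - CUR_COPIES) * LPS ))
echo "extra PPs needed: $EXTRA ($LPS per additional copy)"
```

Comparing the per-copy figure against the FREE PPs column of lsvg -p for each candidate disk shows whether the operation can succeed.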


# mklvcopy -e m -u 1 -s y -k LVname 3 hdiskX hdiskY ...

This modifies logical volume LVname as follows:

-e m
This flag sets the inter-disk allocation policy to minimum, causing the LVM to use the fewest possible disks for future partition allocations to LVname.
-u 1
This flag sets the maximum number of physical volumes to be used in each new allocation of partitions. Setting this to 1 means use the minimum possible.
-s y
This flag instructs the LVM to use a different physical volume for each new copy of the logical volume. This ensures maximum availability by placing each copy on a separate physical disk.
-k
This flag instructs the LVM to synchronize the data in the newly created copies.
LVname 3
LVname is the name of the logical volume to be modified, and the number following it indicates the new total number of copies of each logical partition required (the level of mirroring). For maximum availability, set this to 3.
hdiskX hdiskY ...
The last part of this command is a list of the physical volumes to be used for the updated logical volume; the values entered here should be the names of the physical disks. The smit menus for this operation provide a prompt listing the existing physical volumes in the volume group. Alternatively, use the lsvg command, as detailed above, to list the physical volumes in a volume group.

Next, the write verify and scheduling policies should be modified:


# chlv -d s -v y -w y LVname

This changes the policies for LVname as follows:

-d s
This flag sets the scheduling policy for writing logical partitions to sequential. This is explained in the previous section on creating the logical volume for availability.
-v y
This flag sets the write verification feature on. Again the purpose behind this is explained in the previous section.
-w y
This flag enables mirror write consistency, which is also explained in the previous section.
Mirroring the Root Volume Group

An example of the process of mirroring the root volume group is shown in rootvg Mirroring - Implementation and Recovery.

Reorganizing Volume Groups for Availability

This procedure is exactly the same as has already been described in the previous section on performance.

Managing Disk Space Utilization

Maximizing disk space utilization is possible through configuration of the journaled file system at creation time. As has already been described in File Systems, the primary configuration options are the fragment size, the number of bytes per i-node, and whether compression will be used or not. This section will show the smit and system commands used to create a file system, highlighting those parameters important in disk space management.

To create a file system do the following:


# smit crjfs

Select the volume group that will contain the file system and then:


                          Add a Journaled File System

Type or select values in entry fields.
Press Enter AFTER making all desired changes.

[Entry Fields]
Volume group name datavg #
* SIZE of file system (in 512-byte blocks) [20000]
* MOUNT POINT [/tmp/nick] +
Mount AUTOMATICALLY at system restart? no +
PERMISSIONS read/write +
Mount OPTIONS [] +
Start Disk Accounting? no +
Fragment Size (bytes) 512 +
Number of bytes per inode 512 +
Compression algorithm LZ +





F1=Help F2=Refresh F3=Cancel F4=List
F5=Reset F6=Command F7=Edit F8=Image
F9=Shell F10=Exit Enter=Do

This is the same as using the following system command:


# crfs -v jfs -g datavg -a size=20000 -m /tmp/nick -A no -p rw -t no \
-a frag=512 -a nbpi=512 -a compress=LZ

This will create the file system as follows:

-a frag=512:
The same as Fragment Size (bytes), this parameter sets the size of the minimum allocation unit within the file system.
-a nbpi=512:
The same as Number of bytes per inode, this parameter controls how many i-nodes will be created in the file system. As each i-node takes up 128 bytes of physical space, a low nbpi value can consume a great deal of disk.
-a compress=LZ:
The same as compression algorithm, this option allows a compression type to be selected. By default, the system provides the LZ mechanism. Utilizing compression can improve disk space usage (depending upon the file data) by up to a factor of 2.
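The space arithmetic behind the frag and nbpi parameters can be sketched as follows; the 300-byte file size is an illustrative assumption:

```shell
# Hedged sketch of the disk-space arithmetic for nbpi and frag.
fs_blocks=20000     # file system size in 512-byte blocks, as in the example
nbpi=512            # number of bytes per i-node
inode_size=128      # each i-node occupies 128 bytes

inodes=$(( fs_blocks * 512 / nbpi ))
inode_bytes=$(( inodes * inode_size ))
echo "i-nodes=$inodes space_for_inodes=$inode_bytes"

# Fragment waste for a small file: a file smaller than the fragment size
# still occupies one whole fragment, so the waste grows with fragment size.
file_size=300       # illustrative small file
for frag in 512 4096; do
    echo "frag=$frag wasted_bytes=$(( frag - file_size ))"
done
```

With nbpi=512 the i-nodes alone consume a quarter of the file system, which is why such a low value is only appropriate when very many small files are expected.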

Backup and Restore Management

This section will show how to use the smit menus and system commands available to backup and restore both system and user information.

Backups


**** Note ****

Prior to any file system backup, run the fsck command to ensure file system consistency.


Backing Up User Files or File Systems

**** Warning ****

Do not attempt to back up mounted file systems, as this may result in inconsistencies in the backed-up copy. This warning does not apply to the root file system, which is discussed in the next section.

Using the smit menus to effect these backups will not present the full range of backup options, since doing so would add unnecessary complexity and negate the purpose of smit (ease of use). To back up user files or file systems using smit, enter the following:


# smit backfile

This starts smit in the process for backing up files or directories by name:


                           Backup a File or Directory

Type or select values in entry fields.
Press Enter AFTER making all desired changes.

[Entry Fields]
This option will perform a backup by name.
* Backup DEVICE [/dev/fd0] +/
* FILE or DIRECTORY to backup [.]
Current working DIRECTORY [/u/nickh] /
Backup LOCAL files only? yes +
VERBOSE output? no +
PACK files? no +





F1=Help F2=Refresh F3=Cancel F4=List
F5=Reset F6=Command F7=Edit F8=Image
F9=Shell F10=Exit Enter=Do

This will cause the backup command to be executed as follows:


# cd /u/nickh ; find . -fstype jfs -print | backup -iq -f /dev/fd0

Essentially, this changes directory to the required starting point, locates all specified files, and passes them to the backup command, which is invoked with the following parameters:

-i
This flag causes a backup by name.
-q
This flag indicates that the backup medium (in this case the diskette drive) is ready to use, and a prompt is not required.
-f /dev/fd0
This flag indicates which device should be used for output, in this case the diskette drive. The smit menu option provides a prompt for device selection here.

After executing this command, the specified files will have been copied to the requested device, assuming the device was ready and capable of executing the request, and the files and/or directories specified could be found.

To back up user file systems, first ensure that the file systems to be backed up are unmounted, then do the following:


# umount FSname
# smit backfilesys

This unmounts file system FSname, and starts smit in the process for backing up a file system. If a message is returned to the effect that the file system is busy, then someone is currently using the file system.


                              Backup a Filesystem

Type or select values in entry fields.
Press Enter AFTER making all desired changes.

[Entry Fields]
This option will perform a backup by inode.
* FILESYSTEM to backup [/u] +/
* Backup DEVICE [/dev/fd0] +/
Backup LEVEL (0 for a full backup) [0] #
RECORD backup in /etc/dumpdates? no +





F1=Help F2=Refresh F3=Cancel F4=List
F5=Reset F6=Command F7=Edit F8=Image
F9=Shell F10=Exit Enter=Do

This generates the system command shown below:


# backup -f /dev/fd0 -0 /u

By default, the backup command performs a backup by i-node.

-f
This flag specifies the device to use for the backup. In this case the diskette device.
-0
This flag specifies the backup level. Level 0 is a full backup, levels 1 to 9 are incremental backups. In an incremental backup, only those files that have changed since the last backup at or below that level are backed up.
/u
The last part of the command indicates which file system to actually backup.

**** Warning ****

Any files with UID or GID greater than 65535 will not be backed up properly as the UID and GID will be truncated to two bytes. Therefore they will be restored with invalid UID and GID. This is only true for backup by i-node.
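The truncation described in this warning can be illustrated with a hedged sketch; the UID value is an arbitrary example:

```shell
# A UID above 65535 cannot fit in two bytes; on a by-i-node backup the
# stored value is effectively the original reduced modulo 65536 (the high
# bits are lost).
uid=70000
truncated=$(( uid % 65536 ))
echo "UID $uid would come back as $truncated"
```

This is why such files are restored with invalid ownership: the restored UID bears no useful relation to the original.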


Backing Up the System Image Including User Volume Groups

This section will show how to use smit menus and system commands to back up the operating system volume group and user volume groups.

To back up the root volume group, ensure that all root volume group file systems that require backing up are mounted, then do the following:


# smit mksysb

This will start smit in the process for creating an installable backup of the operating system (rootvg).


                               Back Up the System

Type or select values in entry fields.
Press Enter AFTER making all desired changes.

[Entry Fields]
WARNING: Execution of the mksysb command will
result in the loss of all material
previously stored on the selected
output medium. This command backs
up only rootvg volume group.

* Backup DEVICE or FILE [/dev/rmt0] +/
Make BOOTABLE backup? yes +
(Applies only to tape media)
EXPAND /tmp if needed? (Applies only to bootable no +
media)
Create MAP files? no +
EXCLUDE files? no +
Number of BLOCKS to write in a single output [] #
(Leave blank to use a system default)

F1=Help F2=Refresh F3=Cancel F4=List
F5=Reset F6=Command F7=Edit F8=Image
F9=Shell F10=Exit Enter=Do

This will cause the following system command to be executed:


# mksysb -i /dev/rmt0

The command executed is much more complex, as it attempts to check various prerequisites, and adjust space if required. The command shown will create a bootable image of the system on the tape device specified (/dev/rmt0), but may fail if there is insufficient space in /tmp.

-i
This flag causes the generation of the /image.data file that contains important install information on all the volume groups, logical volumes, file systems, paging spaces, and physical volumes.
/dev/rmt0
The last part of this command specifies the output device to use. In order for a bootable image to be created, this must be a tape device.
MAP Files:
Using map files from the smit menu ensures that physical partitions are allocated exactly as they were in the original, when the backup is installed.

For a non-bootable backup of the operating system volume group, or for a backup of a user volume group, do the following:


# smit savevg

This starts smit in the process for backing up a volume group:


                             Back Up a Volume Group

Type or select values in entry fields.
Press Enter AFTER making all desired changes.

[Entry Fields]
WARNING: Execution of the savevg command will
result in the loss of all material
previously stored on the selected
output medium.

* Backup DEVICE or FILE [/dev/rmt0] +/
* VOLUME GROUP to back up [datavg] +
Create MAP files? no +
EXCLUDE files? no +
Number of BLOCKS to write in a single output [] #
(Leave blank to use a system default)



F1=Help F2=Refresh F3=Cancel F4=List
F5=Reset F6=Command F7=Edit F8=Image
F9=Shell F10=Exit Enter=Do

This generates the following system command:


# savevg -i -f /dev/rmt0 VGname

If the sizes of file systems need to be changed, so that wasted space is reduced when the volume group is restored, the following command should be run prior to the savevg command:


# mkvgdata -m VGname

This will cause map files to be created for the volume group. The file /tmp/vgdata/VGname/VGname.data can then be edited to alter the size of any file systems in the volume group to that required. If this is done, the savevg command must be executed without the -i or -m flags as these will cause the changes to be overwritten. The savevg command should then be executed as follows:


# savevg -f /dev/rmt0 VGname

-i
This causes the generation of the VGname.data file, analogous to the /image.data file mentioned in the section on backing up the system volume group.
-m
This flag causes map files to be written with the backup data to enable the exact replication of physical partition location upon restore.
-f /dev/rmt0
This flag specifies the device to be used for the backup, in this case the tape device at rmt0.
Implementing Scheduled Backups

Implementing scheduled backups at the operating system level involves using a combination of the backup commands discussed already, and the system scheduler cron, to provide a basic automatic backup. More sophisticated backup scheduling and control is possible using script files to execute more complex functions such as checking file systems prior to backup, checking for error conditions, and unmounting file systems prior to execution. The highest level of control available can be found in higher level tools as described in Higher Level Storage Management Products.
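As a hedged illustration of combining the backup command with cron, entries along the following lines could be installed with crontab; the schedule, device, and file system name are assumptions for the example, not taken from the source:

```shell
# Illustrative crontab entries (edit with `crontab -e`).
# Full (level 0) backup of /u each Sunday at 02:00:
0 2 * * 0 /usr/sbin/backup -0 -u -f /dev/rmt0 /u
# Incremental (level 5) backups on weekday nights:
0 2 * * 1-5 /usr/sbin/backup -5 -u -f /dev/rmt0 /u
```

The -u flag records each backup in /etc/dumpdates (the same option offered in the smit menu above), giving subsequent incremental levels a reference point. A production script would also unmount and fsck the file system first, as noted above.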

For examples of this simple level of scheduling automatic backups, see InfoExplorer documentation, in particular the article "Implementing Scheduled Backups".

Restores

This final section will look at some of the smit menus and system commands available to restore backed up information.

Restoring Individual User Files

Restoring individual files that have been accidentally erased requires locating the backup medium on which they were stored. This can be time consuming and involves using the following command to search the backup archives:


# restore -T -f /dev/rmt0

This will list the contents of the backup archive on device rmt0. Alternatively, the -i flag can be used which will interactively prompt for which files and directories to restore.


**** Note ****

It is a good idea to restore files initially to the /tmp directory to avoid overwriting information accidentally.


In order to restore from a complete level 0 backup of files or directories, do the following:


# smit restfile

This will start smit in the process for restoring files or directories:


                          Restore a File or Directory

Type or select values in entry fields.
Press Enter AFTER making all desired changes.

[Entry Fields]
* Restore DEVICE [/dev/fd0] +/
* Target DIRECTORY [.] /
FILE or DIRECTORY to restore []
(Leave blank to restore entire archive.)
VERBOSE output? no +
Number of BLOCKS to read in a single input [] #
operation




F1=Help F2=Refresh F3=Cancel F4=List
F5=Reset F6=Command F7=Edit F8=Image
F9=Shell F10=Exit Enter=Do

This will execute the following system command:


# cd . ; restore -xdq -f /dev/fd0

This will change directory to the target directory, and then restore all files from the backup media specified by the -f flag into it.

-x
This flag causes the restore command to restore files by name.
-d
This flag indicates that the file parameter is a directory, and all files in the directory should be restored by name.
-q
This flag specifies that the medium specified by the -f flag is ready for use and a prompt is not required.
Restoring a User File System

This section shows the method for restoring a full level 0 backup of a file system or directory:


# smit restfilesys

This starts smit in the process for restoring a file system or directory:


                              Restore a Filesystem

Type or select values in entry fields.
Press Enter AFTER making all desired changes.

[Entry Fields]
* Restore DEVICE [/dev/fd0] +/
* Target DIRECTORY [.] /
VERBOSE output? yes +
Number of BLOCKS to read in a single input [] #
operation






F1=Help F2=Refresh F3=Cancel F4=List
F5=Reset F6=Command F7=Edit F8=Image
F9=Shell F10=Exit Enter=Do

This causes the following command to be executed:


# cd . ; restore -rq -f /dev/rmt0 -v

This changes directory to the target directory and restores a complete file system. The file parameter would be ignored in this case, even if included.

-r
This flag specifies that a whole file system is to be restored.
-q
This flag specifies that the media is ready for reading and a prompt is not required.
-f
This flag indicates the media device to be read from (the tape device at rmt0 in this case).
-v
This flag shows more information about the restore process, such as file sizes.
Restoring a User Volume Group

In order to restore an entire user volume group, do the following:


# smit restvg

This starts smit in the process for remaking a volume group:


                             Remake a Volume Group

Type or select values in entry fields.
Press Enter AFTER making all desired changes.

[Entry Fields]
* Restore DEVICE or FILE [/dev/rmt0] +/
SHRINK the filesystems? no +
PHYSICAL VOLUME names [] +
(Leave blank to use the PHYSICAL VOLUMES listed
in the vgname.data file in the backup image)
Number of BLOCKS to read in a single input [] #
(Leave blank to use a system default)




F1=Help F2=Refresh F3=Cancel F4=List
F5=Reset F6=Command F7=Edit F8=Image
F9=Shell F10=Exit Enter=Do

This causes the following system command to be executed:


# restvg -f /dev/rmt0

The restvg command will restore the complete volume group from the specified media. If the option to shrink the file system is chosen, this is equivalent to using the -s flag with the system command, and causes the logical volumes within the volume group to be recreated at the minimum size necessary to contain their file systems.

Physical volume names can also be appended to the command (or included in the smit menu), and if they are, the specified physical volumes will be used to restore the volume group to, rather than those found in the VGname.data file. The physical volumes must be empty, and not belong to any other volume groups.

Summary

This chapter has covered the actual physical management of the elements of storage subsystems that have so far been discussed in theory. The following tasks were detailed: