Practical Examples

This chapter contains a series of practical examples covering a variety of storage management and problem solving situations.

Each example has three major sections:

Planning

Before a key is pressed to configure the available equipment, careful planning is a wise investment. Hence, we begin by considering which volume group(s) our disks should belong to, and how they should be connected to the RISC System/6000.

We have nine SCSI-1 disks available that range in capacity from 355MB to 1.2GB. To show a number of different logical volume manager features coexisting in AIX Version 4, these disks can be initially arranged in four volume groups. The implementation of this volume group setup is shown and discussed in Storage Subsystem Design.

Since there are two SCSI-1 adapters available for the nine SCSI-1 disks, four disks can be connected to one adapter and five disks to the other. The CD-ROM and 8mm tape drive are unlikely to be involved in as much I/O as the disks, so their location is less critical.

A typical setup of this hardware can be seen from the output of the following lsdev command:


# lsdev -Cc disk
hdisk0 Available 00-08-00-0,0 670 MB SCSI Disk Drive
hdisk1 Available 00-08-00-1,0 670 MB SCSI Disk Drive
hdisk2 Available 00-08-00-2,0 355 MB SCSI Disk Drive
hdisk4 Available 00-07-00-0,0 1.2 GB SCSI Disk Drive (in 2.4 GB Disk Unit)
hdisk5 Available 00-07-00-1,0 1.2 GB SCSI Disk Drive (in 2.4 GB Disk Unit)
hdisk6 Available 00-07-00-2,0 1.2 GB SCSI Disk Drive (in 2.4 GB Disk Unit)
hdisk7 Available 00-07-00-3,0 1.2 GB SCSI Disk Drive (in 2.4 GB Disk Unit)
hdisk3 Available 00-08-00-3,0 320 MB SCSI Disk Drive
hdisk8 Available 00-07-00-4,0 857 MB SCSI Disk Drive
# lsdev -Cc cdrom
cd1 Available 00-08-00-4,0 CD-ROM Drive
#
# lsdev -Cc tape
rmt0 Available 00-08-00-6,0 2.3 GB 8mm Tape Drive

Note that the allocation of hdisk names resulted from the following:


**** Warning - For hdiskx, x may change ****

We expect when this system is reinstalled, the disk devices will be reconfigured. Depending on what is powered on at installation time, the disk name assigned to a particular device at a particular SCSI address may change.


The devices connected to the SCSI adapter in slot seven are inside a model 9334-500, which is designed to provide extra storage capacity for the RISC System/6000. The power supply in the 9334 and the second SCSI adapter in the RISC System/6000 combine to reduce the number of single points of failure that exist with the standard components in the RISC System/6000. All this equipment is located in close proximity in an appropriate office environment.

Once the hardware is correctly connected and working, the operating system needs to be installed, by default in the rootvg, before detailed configuration can be completed. To enable the rootvg to be mirrored, AIX Version 4 is initially installed only on hdisk0, before a copy is created on hdisk2. Please refer to the AIX Version 4.1 Installation Guide both before and during the initial installation of AIX Version 4 on hdisk0.

More detail regarding the level of AIX Version 4 in use can be obtained from the commands lslpp or uname (please refer to the AIX Version 4.1 Commands Reference for usage details). For these practical examples, we used:


# uname -a
AIX 9421A-UP bilbo 1 4 000004461000
#

The mirrored operating system implementation is described in the next section.
**** Warning - Always have good backups in place ****

It is very important that you are familiar with the backup concepts discussed elsewhere in this book (see Planning Backup Strategies), and in the book AIX Version 4.1 System Management Guide: Operating System and Devices (which may also be available in your AIX Version 4.1 Hypertext Information Base Library).

If any example in this document does not work in your particular circumstances, then reinstallation from a backup may be your only viable recovery method.


rootvg Mirroring - Implementation and Recovery

This section describes how, once AIX Version 4 has been installed, to create a mirror of rootvg and then how to test it. In the following example, a second copy of each logical volume on hdisk0 is made on hdisk2, an externally powered 355MB disk unit. This device is then powered off at various times to test the continued availability of the operating system. The availability test is discussed for two scenarios: loss of the internal boot disk at boot time, and failure of a disk during normal operations.

This section is based on the suggestions provided by the AIX Version 4.1 Hypertext Information Base Library articles Mirroring rootvg for Maximum Operating System Availability and Recovering a Disk Drive without Reformatting. As suggested by the title of the latter article, the availability test assumes that the disk media has not been damaged and thus still has a valid, unique PVID. This means that the recovery steps can be as simple as a system reboot once the non-media related disk problem is fixed.

However, if the media fails:

  1. Remove all physical partitions from the failed disk.
  2. Remove the failed disk from the volume group.
  3. Add the new disk to the volume group.
  4. Rebuild the logical volume copies and synchronize them.
  5. Rebuild any single copy logical volumes and restore backups.
For more details, refer to the AIX Version 4.1 Hypertext Information Base Library article Recovering from Disk Drive Problems.
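Under the assumption that the failed disk is hdisk2 and that its replacement has been configured as hdisk9 (a hypothetical name used only for illustration), the five steps above can be sketched as a dry-run script. Every AIX command is echoed rather than executed, so the sequence can be reviewed and adapted before use.

```shell
#!/bin/sh
# Dry-run sketch of the media-failure recovery steps.  Each AIX
# command is echoed, not executed.  FAILED, NEW, and LVS are
# illustrative assumptions -- substitute your own names.
FAILED=hdisk2                  # the failed physical volume
NEW=hdisk9                     # its replacement (hypothetical name)
LVS="hd4 hd2 hd9var hd3 hd1 hd8 hd6 paging00"

recovery_commands() {
    # Steps 1 and 2: remove the copies on the failed disk, then
    # remove the disk itself from the volume group.
    for lv in $LVS; do
        echo "rmlvcopy $lv 1 $FAILED"
    done
    echo "reducevg rootvg $FAILED"
    # Step 3: add the replacement disk to the volume group.
    echo "extendvg rootvg $NEW"
    # Steps 4 and 5: rebuild the copies; varyonvg resynchronizes
    # the stale partitions via syncvg.
    for lv in $LVS; do
        echo "mklvcopy $lv 2 $NEW"
    done
    echo "varyonvg rootvg"
}

recovery_commands
```

Any single-copy logical volumes would still need to be recreated and restored from backup by hand, as step 5 notes.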

The performance implications of rootvg mirroring are not investigated in this example.

Command Line Summary

  1. Document your initial rootvg configuration; the following commands produce the necessary output:
    # lspv
    # lsvg -l rootvg
    # lsvg rootvg
    # lsvg -p rootvg
    # lslv -m hd9var
    # lsvg -M rootvg

  2. Create logical volume copies:
    • Turn off quorum checking:
      # chvg -a y -Q n rootvg

    • Add a physical volume to mirror to (if necessary), in these examples we are mirroring to a new physical volume called hdisk2:
      # extendvg -f rootvg hdisk2

      This command assumes that you wish to add a new physical volume called hdisk2 to the root volume group.

    • Create the mirrored copies for all logical volumes:
      # mklvcopy hd4 2 hdisk2

      Repeat this for every logical volume in rootvg:

      • hd1 (/home).
      • hd2 (/usr).
      • hd3 (/tmp).
      • hd8 (jfslog).
      • hd9var (/var).
      • hd6 (default paging space).
      • Any other logical volumes that you may have created, except the boot logical volume (see detailed guidance for the reasoning behind this).
  3. Create second boot logical volume, and build a boot image on it:
    # mklv -y hd5x  -t boot -a e rootvg 1 hdisk2
    # bosboot -a -l /dev/hd5x -d /dev/hdisk2

  4. Update bootlist:
    # bootlist -m normal hdisk0 hdisk2

  5. Synchronize rootvg copies:
    # varyonvg rootvg

This completes the command line overview of the process. A detailed description of how to achieve mirroring for the root volume group now follows.
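The five-step summary can also be collected into a single dry-run script that echoes each command instead of running it; the disk names and logical volume list match this chapter's example and are assumptions for any other system.

```shell
#!/bin/sh
# Dry-run sketch of the rootvg mirroring summary: every command is
# printed, not executed, so it can be checked against your own
# configuration first.  hdisk0/hdisk2 and the LV list match this
# chapter's example only.
TARGET=hdisk2

mirror_rootvg() {
    echo "chvg -a y -Q n rootvg"              # step 2: quorum off
    echo "extendvg -f rootvg $TARGET"         # step 2: add the disk
    for lv in hd4 hd1 hd2 hd3 hd8 hd9var hd6; do
        echo "mklvcopy $lv 2 $TARGET"         # step 2: mirror each LV
    done
    echo "mklv -y hd5x -t boot -a e rootvg 1 $TARGET"   # step 3
    echo "bosboot -a -l /dev/hd5x -d /dev/$TARGET"      # step 3
    echo "bootlist -m normal hdisk0 $TARGET"  # step 4
    echo "varyonvg rootvg"                    # step 5: triggers syncvg
}

mirror_rootvg
```

Changing each `echo "..."` into the bare command would turn the sketch into a working procedure, but as the detailed guidance below explains, the dump device must be checked first and hd5 must not be mirrored.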

Detailed Guidance

How to Document the Initial rootvg Configuration

The initial layout of rootvg can be seen from the output of the following commands. More examples describing the use of these commands, and similar variations to them, are provided in Storage Management Files and Commands Summary.

To see how to use smit to execute most of these commands so that the output can be viewed in the smit.log file, please refer to How to Document the Volume Group Design.


# lspv
hdisk0 00014732b1bd7f57 rootvg
hdisk1 0001221800072440 newvg
hdisk2 00012218da42ba76 None
hdisk6 000002007bb618f5 myvg
hdisk7 000002007bb623c1 None
hdisk3 0002479088f5f347 None
#
# lsvg -l rootvg
rootvg:
LV NAME TYPE LPs PPs PVs LV STATE MOUNT POINT
hd6 paging 8 8 1 open/syncd N/A
hd5 boot 1 1 1 closed/syncd N/A
hd8 jfslog 1 1 1 open/syncd N/A
hd4 jfs 1 1 1 open/syncd /
hd2 jfs 50 50 1 open/syncd /usr
hd9var jfs 3 3 1 open/syncd /var
hd3 jfs 2 2 1 open/syncd /tmp
hd1 jfs 1 1 1 open/syncd /home
paging00 paging 16 16 1 open/syncd N/A
# lsvg rootvg
VOLUME GROUP: rootvg VG IDENTIFIER: 00000446899fd108
VG STATE: active PP SIZE: 4 megabyte(s)
VG PERMISSION: read/write TOTAL PPs: 159 (636 megabyte)
MAX LVs: 256 FREE PPs: 76 (304 megabyte)
LVs: 9 USED PPs: 83 (332 megabyte)
OPEN LVs: 8 QUORUM: 2
TOTAL PVs: 1 VG DESCRIPTORS: 2
STALE PVs: 0 STALE PPs: 0
ACTIVE PVs: 1 AUTO ON: yes
#
# lsvg -p rootvg
rootvg:
PV_NAME PV STATE TOTAL PPs FREE PPs FREE DISTRIBUTION
hdisk0 active 159 76 28..24..00..00..24
#
# lslv -m hd9var |pg
hd9var:/var
LP PP1 PV1 PP2 PV2 PP3 PV3
0001 0074 hdisk0
0002 0003 hdisk0
0003 0004 hdisk0
#

It is also useful to document the complete current partition map of the rootvg volume group by using the command lsvg -M rootvg|pg. This command may produce a long output for a large volume group so its output is not included here. However, the outputs of lsvg -l rootvg and lslv -m hd9var clearly show, from the one to one ratio of logical to physical partitions, that no logical volume in the rootvg is currently mirrored.
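The one-to-one check can be automated by parsing the lsvg -l output. The following sketch runs the filter over a saved sample (an abbreviated version of the listing above); against a live system you would pipe lsvg -l rootvg straight into the awk command shown in the comment.

```shell
#!/bin/sh
# Report logical volumes that have no mirror copy, judged by the
# LPs:PPs ratio in 'lsvg -l' output.  The here-document is a saved
# sample; on a live system use:
#   lsvg -l rootvg | awk 'NR > 2 && $4 < 2 * $3 { print $1 }'
unmirrored() {
    cat <<'EOF' | awk 'NR > 2 && $4 < 2 * $3 { print $1 }'
rootvg:
LV NAME TYPE LPs PPs PVs LV STATE MOUNT POINT
hd6 paging 8 8 1 open/syncd N/A
hd4 jfs 1 2 2 open/syncd /
hd2 jfs 50 50 1 open/syncd /usr
EOF
}
unmirrored
```

For the unmirrored rootvg documented above, every logical volume would be reported; once mirroring is complete, only the single-copy boot logical volumes should appear.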

How to Create the rootvg Logical Volume Mirror Copies

In this example, the mirrored rootvg consists of only two disks. This means that, by default, one disk contains two copies of the VGDA, and that disk must be operational to maintain quorum. To ensure that the rootvg volume group stays online, using the mirrored logical volume copies, when that disk fails, quorum checking needs to be turned off.

Turn the rootvg quorum function off by entering:

  1. smitty vg.
  2. From the Volume Groups menu select Set Characteristics of a Volume Group.
  3. From the Set Characteristics of a Volume Group menu, select Change a Volume Group.
  4. On the menu Change a Volume Group, type in rootvg for the option labelled VOLUME GROUP name and press the Enter key (this could also be selected from the option F4=List).
  5. Change the QUORUM field so the screen looks like:
                                 Change a Volume Group

    Type or select values in entry fields.
    Press Enter AFTER making all desired changes.

    [Entry Fields]
    * VOLUME GROUP name rootvg
    * Activate volume group AUTOMATICALLY yes +
    at system restart?
    * A QUORUM of disks required to keep the volume no +
    group on-line ?







    F1=Help F2=Refresh F3=Cancel F4=List
    F5=Reset F6=Command F7=Edit F8=Image
    F9=Shell F10=Exit Enter=Do

  6. As suggested by Enter=Do, press the Enter key.

When smit returns OK, a second disk is added to rootvg so that mirror copies of all rootvg logical volumes can be created on it.

To add hdisk2 to the rootvg:

  1. Use F3=Cancel to return to the menu named Set Characteristics of a Volume Group.
  2. From this menu, select Add a Physical Volume to a Volume Group.
  3. Type rootvg and hdisk2 so that the screen looks like:
                        Add a Physical Volume to a Volume Group

    Type or select values in entry fields.
    Press Enter AFTER making all desired changes.

    [Entry Fields]
    * VOLUME GROUP name [rootvg]
    * PHYSICAL VOLUME names [hdisk2]












    F1=Help F2=Refresh F3=Cancel F4=List
    F5=Reset F6=Command F7=Edit F8=Image
    F9=Shell F10=Exit Enter=Do

  4. As suggested by Enter=Do, press the Enter key.
  5. Use F10=Exit to return to the command prompt.

Now create a mirrored copy of all file systems, the file systems log, and the paging spaces on hdisk2.

For the root file system:

  1. Type smitty lv.
  2. Select Set Characteristic of a Logical Volume.
  3. Select Add a Copy to a Logical Volume.
  4. To select the root file system, either type hd4 and press the Enter=Do key, or press F4=List to display a screen that looks like:
                             Add Copies to a Logical Volume

    Type or select a value for the entry field.
    Press Enter AFTER making all desired changes.
    _______________________________________________________________________
    | LOGICAL VOLUME name |
    * | |
    | |
    | Move cursor to desired item and press Enter. |
    | |
    | loglv00 jfslog 1 1 1 closed/syncd N/A |
    | lv00 jfs 1 2 2 closed/stale /myfs|
    | hd6 paging 8 8 1 open/syncd N/A |
    | hd5 boot 1 1 1 closed/syncd N/A |
    | hd8 jfslog 1 1 1 open/syncd N/A |
    | hd4 jfs 1 1 1 open/syncd / |
    | hd2 jfs 50 50 1 open/syncd /usr |
    | hd9var jfs 3 3 1 open/syncd /var |
    | hd3 jfs 2 2 1 open/syncd /tmp |
    | hd1 jfs 1 1 1 open/syncd /home|
    | paging00 paging 16 16 1 open/syncd N/A |
    | |
    | F1=Help F2=Refresh F3=Cancel |
    F1| F8=Image F10=Exit Enter=Do |
    F5| /=Find n=Find Next |
    |______________________________________________________________________|

  5. Press the Down Arrow key until the line that contains hd4 is highlighted, and then press the Enter=Do key.
  6. Move the cursor again to the field named NEW TOTAL number of logical partition copies and then use the Tab key to select the value 2.
  7. Leave the field SYNCHRONIZE the data in the new logical partition copies? with its default of no since we'll synchronize it later in How to Synchronize rootvg.
  8. Move the cursor to the field named PHYSICAL VOLUME names and then either type in hdisk2 or use the F4=List function key to select it so the screen looks like:
                             Add Copies to a Logical Volume

    Type or select values in entry fields.
    Press Enter AFTER making all desired changes.

    [Entry Fields]
    * LOGICAL VOLUME name hd4
    * NEW TOTAL number of logical partition 2 +
    copies
    PHYSICAL VOLUME names [hdisk2] +
    POSITION on physical volume center +
    RANGE of physical volumes minimum +
    MAXIMUM NUMBER of PHYSICAL VOLUMES [32] #
    to use for allocation
    Allocate each logical partition copy yes +
    on a SEPARATE physical volume?
    File containing ALLOCATION MAP []
    SYNCHRONIZE the data in the new no +
    logical partition copies?



    F1=Help F2=Refresh F3=Cancel F4=List
    F5=Reset F6=Command F7=Edit F8=Image
    F9=Shell F10=Exit Enter=Do

  9. When smit returns OK to indicate that the command is complete, use the F3=Cancel key a few times to return to the menu with the title Logical Volumes.
  10. Repeat the above copy creation for each of the following logical volumes:
    1. Copy hd1 that contains /home.
    2. Copy hd2 that contains /usr.
    3. Copy hd3 that contains /tmp.
    4. Copy hd8 that contains the file system log.
    5. Copy hd9var that contains /var.
    6. Copy hd6 that contains the default paging device.
    7. Copy paging00 that contains a second paging device.
      **** Warning - Check your dump device ****

      If you have a new AIX Version 4 system, then the hd6 logical volume is also likely to be the system dump device. This can be checked by the command sysdumpdev.

      If hd6 is the dump device and you want to be able to capture a valid dump, then you must change the primary dump device by using the command sysdumpdev -p /dev/dump_device_name -P.

      Alternatively, you can follow the smit menus obtained from the command smitty sysdumpdev to check and, if necessary, change the primary dump device.

      Do not mirror the dump device; any dump to a mirrored dump device will fail. Please refer to the article Developing a Logical Volume Strategy in the AIX Version 4.1 Hypertext Information Base Library.


    8. We can easily check that the copies have been created by using the following lsvg command:
      # lsvg -l rootvg
      rootvg:
      LV NAME TYPE LPs PPs PVs LV STATE MOUNT POINT
      hd6 paging 8 16 2 open/stale N/A
      hd5 boot 1 1 1 closed/syncd N/A
      hd8 jfslog 1 2 2 open/stale N/A
      hd4 jfs 1 2 2 open/stale /
      hd2 jfs 50 100 2 open/stale /usr
      hd9var jfs 3 6 2 open/stale /var
      hd3 jfs 2 4 2 open/stale /tmp
      hd1 jfs 1 2 2 open/stale /home
      paging00 paging 16 32 2 open/stale N/A
      #

      Note that the logical volumes are currently in a stale state. This reflects the fact that the data in the most recently created copies is older than that in the original copies; the data is synchronized in a subsequent step.

    9. As stated in the AIX Version 4.1 Hypertext Information Base Library article Mirroring rootvg for Maximum Operating System Availability, the creation of a mirror copy of hd5, the boot logical volume, is not recommended. Instead, create a new boot logical volume called hd5x.
      1. Select, in the Logical Volumes menu, the option Add a Logical Volume.
      2. When prompted for the VOLUME GROUP name, type in rootvg and press Enter=Do, or use F4=List to select it.
      3. In the Add a Logical Volume menu, leave all entries as default, except for Logical volume NAME, Number of LOGICAL PARTITIONS, PHYSICAL VOLUME names, and Logical volume TYPE, so that the screen looks like:
                                      Add a Logical Volume

        Type or select values in entry fields.
        Press Enter AFTER making all desired changes.

        [TOP] [Entry Fields]
        Logical volume NAME [hd5x]
        * VOLUME GROUP name rootvg
        * Number of LOGICAL PARTITIONS [2] #
        PHYSICAL VOLUME names [hdisk2] +
        Logical volume TYPE [boot]
        POSITION on physical volume outer_edge +
        RANGE of physical volumes minimum +
        MAXIMUM NUMBER of PHYSICAL VOLUMES [] #
        to use for allocation
        Number of COPIES of each logical 1 +
        partition
        Mirror Write Consistency? yes +
        Allocate each logical partition copy yes +
        on a SEPARATE physical volume?
        [MORE...9]

        F1=Help F2=Refresh F3=Cancel F4=List
        F5=Reset F6=Command F7=Edit F8=Image
        F9=Shell F10=Exit Enter=Do

        Note that you can use any Logical volume NAME, and you can use more than one physical partition, although this does waste space since hd5 only occupies 4MB. For the Logical volume TYPE, you must type in the word boot to ensure that you do not get the default type of jfs, which is for an ordinary jfs file system like /home. Finally, do not forget to type hdisk2 for the PHYSICAL VOLUME name so that hd5x is not created on hdisk0, which is where hd5, the original boot logical volume, exists.

      4. Use F10=Exit to exit smit when the command completion is indicated by:
                                         COMMAND STATUS

        Command: OK stdout: yes stderr: no

        Before command completion, additional instructions may appear below.

        hd5x
















        F1=Help F2=Refresh F3=Cancel F6=Command
        F8=Image F9=Shell F10=Exit /=Find
        n=Find Next

      5. Now that hd5x exists, build a boot image on it by entering the following command after using the F10=Exit key to leave smit.
        # bosboot -a -l /dev/hd5x -d /dev/hdisk2

        bosboot: Boot image is 4259 512 byte blocks.

        The output will appear after approximately 30 seconds.
        **** Warning - Be careful with bosboot ****

        It is very important to be aware of the following advice given in the AIX Version 4.1 Hypertext Information Base Library article Mirroring rootvg for Maximum Operating System Availability.

        If you put on a ptf that performs a bosboot or personally do bosboot and you are mirroring the rootvg, you must remember to do a bosboot to the secondary /blv.

        Furthermore, we suggest that you execute the command bootlist -m normal hdisk0 hdisk2, and then reboot using hd5 which is on hdisk0, before you execute any command that calls the bosboot command.

        If you do not do this, you may get errors such as:


        installp:  bosboot verification starting...

        0301-168 bosboot: The current boot logical volume, /dev/hd5,
        does not exist on /dev/hdisk2.
        The installation or updating script is unable to continue
        installp: An error occurred during bosboot processing.
        Please correct the problem and rerun installp.


  11. Now that all logical volumes in the rootvg exist with their primary copy on hdisk0 and their mirror copy on hdisk2, the list of devices to attempt to boot from in normal mode needs to be updated so the RISC System/6000 can boot from hdisk2 if hdisk0 is not available.

    Use the command:


    #
    # bootlist -m normal hdisk0 hdisk2
    #
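Before relying on the new boot logical volume, it is worth confirming that every physical partition of hd5x really lies on hdisk2. The sketch below parses a saved sample of lslv -m hd5x output (consistent with the two-partition hd5x created above); the live form of the check is shown in the comment.

```shell
#!/bin/sh
# Confirm that all physical partitions of hd5x are on hdisk2 by
# parsing 'lslv -m' output.  The here-document is a sample capture;
# on a live system use:
#   lslv -m hd5x | awk 'NR > 2 && $3 != "hdisk2" { print "MISPLACED: " $0 }'
check_hd5x() {
    cat <<'EOF' | awk 'NR > 2 { if ($3 == "hdisk2") ok++; else bad++ }
        END { print (bad ? "MISPLACED" : "OK on hdisk2 (" ok " PPs)") }'
hd5x:N/A
LP PP1 PV1 PP2 PV2 PP3 PV3
0001 0001 hdisk2
0002 0002 hdisk2
EOF
}
check_hd5x
```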

How to Synchronize rootvg

The newly created mirror copies need to be synchronized with the originals to complete the creation of a mirrored rootvg. This can be done with the command syncvg -v rootvg, or, to use smit:

  1. Type smitty vg.
  2. From the Volume Groups menu, select Activate a Volume Group.
  3. For the field VOLUME GROUP name, type rootvg or use the F4=List to select it so that the screen looks like:
                                Activate a Volume Group

    Type or select values in entry fields.
    Press Enter AFTER making all desired changes.

    [Entry Fields]
    * VOLUME GROUP name [rootvg] +
    RESYNCHRONIZE stale physical partitions? yes +
    Activate volume group in SYSTEM no +
    MANAGEMENT mode?
    FORCE activation of the volume group? no +
    Warning--this may cause loss of data
    integrity.










    F1=Help F2=Refresh F3=Cancel F4=List
    F5=Reset F6=Command F7=Edit F8=Image
    F9=Shell F10=Exit Enter=Do

  4. Press Enter=Do to execute the synchronization process.

    This smit menu runs the command varyonvg rootvg, as can be seen from the F6=Command function key. The varyonvg command in turn starts the syncvg command before varyonvg exits. This can be seen by pressing F10=Exit when smit returns an OK prompt, and then searching through the output of the ps -ef command to find:


    root 7066 1 0 13:33:36 pts/0 0:00 bsh /usr/sbin/syncvg -v root
    root 7264 6494 0 13:17:41 pts/1 0:00 -ksh
    root 7604 7066 1 13:34:50 pts/0 0:00 lresynclv -l 00000446899fd108
    root 7858 7264 5 13:34:46 pts/1 0:00 lsvg -l rootvg
    root 8118 6056 34 13:35:05 pts/0 0:00 ps -ef


    **** Warning - syncvg continues ****

    Although smit quickly returns an OK prompt, syncvg continues to run, and, depending on the size of your rootvg, may run for a long time.

    In addition to the previous ps -ef command, you can regularly repeat the following command until the field STALE PPs has a value of 0, which indicates that synchronization is complete.


    # lsvg rootvg
    VOLUME GROUP: rootvg VG IDENTIFIER: 00000446899fd108
    VG STATE: active PP SIZE: 4 megabyte(s)
    VG PERMISSION: read/write TOTAL PPs: 243 (972 megabytes)
    MAX LVs: 256 FREE PPs: 76 (304 megabytes)
    LVs: 10 USED PPs: 167 (668 megabytes)
    OPEN LVs: 8 QUORUM: 1
    TOTAL PVs: 2 VG DESCRIPTORS: 3
    STALE PVs: 1 STALE PPs: 76
    ACTIVE PVs: 2 AUTO ON: yes

    In this example, syncvg required approximately one hour and 15 minutes on a quiesced system.

    Note that the value of the QUORUM: field is 1 because the quorum function has been turned off.
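Rather than re-reading the whole report, the stale partition count can be pulled out of the lsvg output directly. This is a sketch using a saved fragment of the output above; the comment shows how the same filter could drive a polling loop on a live system.

```shell
#!/bin/sh
# Extract the STALE PPs count from 'lsvg rootvg' output.  The sample
# matches the capture above; a live polling loop might look like:
#   while [ "$(lsvg rootvg | awk '/STALE PVs/ { print $NF }')" -gt 0 ]; do
#       sleep 60
#   done
stale_pps() {
    cat <<'EOF' | awk '/STALE PVs/ { print $NF }'
STALE PVs: 1 STALE PPs: 76
ACTIVE PVs: 2 AUTO ON: yes
EOF
}
stale_pps
```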


How to Check the Implementation of a Mirrored rootvg

When syncvg finally completes, execute the following command:


# lsvg -l rootvg
rootvg:
LV NAME TYPE LPs PPs PVs LV STATE MOUNT POINT
hd6 paging 8 16 2 open/syncd N/A
hd5 boot 1 1 1 closed/syncd N/A
hd8 jfslog 1 2 2 open/syncd N/A
hd4 jfs 1 2 2 open/syncd /
hd2 jfs 50 100 2 open/syncd /usr
hd9var jfs 3 6 2 open/syncd /var
hd3 jfs 2 4 2 open/syncd /tmp
hd1 jfs 1 2 2 open/syncd /home
paging00 paging 16 32 2 open/syncd N/A
hd5x boot 2 2 1 closed/syncd N/A
#

This shows that there now exist two boot-type logical volumes and two copies of all other logical volumes (indicated by the 2:1 ratio of PPs to LPs), and that all the rootvg logical volumes are now in a syncd state.

Other commands you can use to check the new rootvg configuration include:


# lslv -m hd9var |pg
hd9var:/var
LP PP1 PV1 PP2 PV2 PP3 PV3
0001 0074 hdisk0 0003 hdisk2
0002 0003 hdisk0 0004 hdisk2
0003 0004 hdisk0 0005 hdisk2
# lsvg -p rootvg
rootvg:
PV_NAME PV STATE TOTAL PPs FREE PPs FREE DISTRIBUTION
hdisk0 active 159 76 28..24..00..00..24
hdisk2 active 84 0 00..00..00..00..00
# lsvg -M rootvg|pg
rootvg
more output.....
hdisk0:134 hd2:48:1
hdisk0:135 hd2:49:1
hdisk0:136-159
hdisk2:1 hd2:49:2
hdisk2:2 hd2:50:2
hdisk2:3 hd9var:1:2
more output.....

Notice that hdisk2 is full. This means that we cannot currently extend the rootvg logical volumes that have mirror copies on hdisk2. We can still create new non-mirrored logical volumes on hdisk0 in the rootvg volume group, but their data would be unavailable if hdisk0 fails.

The implementation of a mirrored rootvg is now complete.

How to Test the rootvg Logical Volume Mirror Copies

Recall that the bootlist command used earlier forces the RISC System/6000 to try to boot from hdisk0 before hdisk2. Hence, the first test requires AIX Version 4 to be shut down, and the internal disks to be disconnected from their power cables, before the RISC System/6000 is powered back on.


**** Warning - Handle hardware with care ****

Ensure that a qualified individual is available to follow the correct procedures required when a RISC System/6000 unit is serviced.


Error messages associated with the powered-off internal disks are displayed during the boot sequence. You can easily confirm that there is a disk problem with the following commands:


# lsvg -l rootvg
rootvg:
LV NAME TYPE LPs PPs PVs LV STATE MOUNT POINT
hd6 paging 8 16 2 open/syncd N/A
hd5 boot 1 1 1 closed/syncd N/A
hd8 jfslog 1 2 2 open/stale N/A
hd4 jfs 1 2 2 open/stale /
hd2 jfs 50 100 2 open/stale /usr
hd9var jfs 3 6 2 open/stale /var
hd3 jfs 2 4 2 open/stale /tmp
hd1 jfs 1 2 2 open/stale /home
paging00 paging 16 32 2 open/syncd N/A
hd5x boot 2 2 1 closed/syncd N/A
#
# lsdev -Cc disk
hdisk0 Defined 00-08-00-0,0 670 MB SCSI Disk Drive
hdisk1 Defined 00-08-00-1,0 670 MB SCSI Disk Drive
hdisk2 Available 00-08-00-2,0 355 MB SCSI Disk Drive
hdisk4 Available 00-07-00-0,0 1.2 GB SCSI Disk Drive (in 2.4 GB Disk Unit)
hdisk5 Available 00-07-00-1,0 1.2 GB SCSI Disk Drive (in 2.4 GB Disk Unit)
hdisk6 Available 00-07-00-2,0 1.2 GB SCSI Disk Drive (in 2.4 GB Disk Unit)
hdisk7 Available 00-07-00-3,0 1.2 GB SCSI Disk Drive (in 2.4 GB Disk Unit)
hdisk3 Available 00-08-00-3,0 320 MB SCSI Disk Drive

The lsvg command shows that the rootvg logical volumes are now in a stale state, and the lsdev command shows that the internal hdisk0 is not available for normal operations.

The command:


# bootinfo -b
hdisk2
#

shows that the mirrored rootvg configuration has worked, since AIX Version 4 has now booted from hdisk2 instead of hdisk0.
**** Warning - Do not change the rootvg configuration ****

Do not make any changes to the rootvg configuration at this point, since this information would be recorded only on the copies on hdisk2. Since this test sequence next involves a reboot from hdisk0 while hdisk2 remains powered off, the VGSA on hdisk0 will be flagged as having the most recent copies of the rootvg logical volumes.

Hence the hdisk0 copies are used to overwrite the hdisk2 copies during the subsequent synchronization step after the external 355MB hdisk2 device is powered back on.


Shut down the RISC System/6000 by executing the shutdown -f command, so that power can then be restored to hdisk0. Since hdisk2 is an external 355MB disk unit, leave it powered off when the RISC System/6000 is turned on so that the RISC System/6000 will now boot from hdisk0 again instead of hdisk2.

Among the boot messages, you will see:


varyonvg: Volume group rootvg is varied on.
PV Status: hdisk0 00014732b1bd7f57 PVACTIVE
hdisk2 00012218da42ba76 PVMISSING
0516-068 lresynclv: Unable to completely resynchronize volume. Run
diagnostics if necessary.
0516-932 /usr/sbin/syncvg: Unable to synchronize volume group rootvg.
0516-068 lresynclv: Unable to completely resynchronize volume. Run
diagnostics if necessary.
0516-932 /usr/sbin/syncvg: Unable to synchronize volume group rootvg.

This is normal since hdisk2 really is unavailable. The varyonvg rootvg command is run automatically during the boot sequence and thus the above errors are recorded.

We can again confirm that hdisk0 was used to boot by the command:


# bootinfo -b
hdisk0

You can also repeat the command lsvg -l rootvg to verify that the logical volumes are still in a stale state.

In this example, we were surprised that the paging devices hd6 and paging00 were not in a stale state. This may change in a later level of AIX Version 4 than the one that we tested.

However, you can force the paging devices to become stale by starting many memory-intensive processes in a loop. For example, you can start the graphical version of the AIX Version 4.1 Hypertext Information Base Library with the command info &.

We can see that the paging devices are now stale from:


# lsvg -l rootvg
rootvg:
LV NAME TYPE LPs PPs PVs LV STATE MOUNT POINT
hd6 paging 8 16 2 open/stale N/A
hd5 boot 1 1 1 closed/syncd N/A
hd8 jfslog 1 2 2 open/stale N/A
hd4 jfs 1 2 2 open/stale /
hd2 jfs 50 100 2 open/stale /usr
hd9var jfs 3 6 2 open/stale /var
hd3 jfs 2 4 2 open/stale /tmp
hd1 jfs 1 2 2 open/stale /home
paging00 paging 16 32 2 open/stale N/A
hd5x boot 2 2 1 closed/syncd N/A
#


**** Warning - Don't be misled by lsps ****

Note that the output of lsps:


# lsps -a
Page Space Physical Volume Volume Group Size %Used Active Auto
paging00 hdisk0 rootvg 64MB 43 yes yes
paging00 hdisk2 rootvg 64MB 43 yes yes
hd6 hdisk0 rootvg 32MB 100 yes yes
hd6 hdisk2 rootvg 32MB 100 yes yes
#

may indicate that all copies of paging devices are accessible, when in fact those on hdisk2 are not.
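Since lsps -a does not reveal the stale copies, a cross-check against lsvg -l is useful. The sketch below filters a saved sample (matching the listing above) for paging logical volumes in a stale state; the live form of the filter is shown in the comment.

```shell
#!/bin/sh
# List paging logical volumes whose LV STATE is stale -- information
# that 'lsps -a' does not show.  The here-document is a saved sample;
# on a live system use:
#   lsvg -l rootvg | awk 'NR > 2 && $2 == "paging" && $6 ~ /stale/ { print $1 }'
stale_paging() {
    cat <<'EOF' | awk 'NR > 2 && $2 == "paging" && $6 ~ /stale/ { print $1 }'
rootvg:
LV NAME TYPE LPs PPs PVs LV STATE MOUNT POINT
hd6 paging 8 16 2 open/stale N/A
hd5 boot 1 1 1 closed/syncd N/A
paging00 paging 16 32 2 open/stale N/A
EOF
}
stale_paging
```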

More detailed information about the status of each logical partition and physical partition can be obtained from the commands discussed in Storage Management Files and Commands Summary. For example, use:


# lslv -p hdisk2 hd6
hdisk2:hd6:N/A
STALE USED STALE USED STALE STALE USED USED STALE STALE
STALE STALE STALE STALE STALE STALE STALE

0001? 0002? 0003? 0004? 0005? 0006? 0007? 0008? STALE STALE
USED USED USED USED USED USED USED
more output.....

to see that all copies of hd6 logical partitions on hdisk2 happen to be in a STALE state, as indicated by the question mark.

Also note that not all physical partitions on hdisk2 are STALE; those physical partitions that have not been accessed for any I/O operation are still in a USED state. However, as seen from the lsvg -l rootvg command, the logical volumes that these physical partitions belong to have been marked stale.

How to Return to a Synchronized State

The simplest method is to reboot the RISC System/6000. However, in this example, assuming there are other users currently on the system, configure the defined hdisk2 using the following steps:

  1. Execute the command smitty devices.
  2. Select Fixed Disk.
  3. Select Configure a Defined Disk.
  4. From the Disk sub-menu that appears, select hdisk2 Defined 00-08-00-2,0 355 MB SCSI Disk Drive.
  5. Press F10=Exit when smit returns an OK prompt.

After you've confirmed that the disk is Available from the lsdev -Cc disk command, repeat the synchronization step used in the creation of the mirrored rootvg. Hence:

  1. Execute the command smitty vg.
  2. Select Activate a Volume Group.
  3. Type rootvg or use F4=List to select it.
  4. Press the Enter=Do key.
  5. Press F10=Exit when smit returns an OK prompt.

**** Note ****

We suggest that you use the varyonvg command rather than both the chpv and syncvg commands to synchronize the rootvg volume group.


From the output of the following command:


# iostat

tty: tin tout avg-cpu: % user % sys % idle % iowait
0.3 12.9 6.1 5.1 78.9 10.0

Disks: % tm_act Kbps tps Kb_read Kb_wrtn
hdisk0 14.9 55.5 5.3 231182 65737
hdisk3 0.0 0.2 0.0 1052 0
hdisk1 0.2 1.2 0.0 1070 5191
hdisk4 0.2 2.8 0.0 14777 69
hdisk5 0.1 1.6 0.0 8694 46
hdisk6 0.0 0.6 0.0 3172 154
hdisk7 0.0 0.2 0.0 1052 0
hdisk2 1.2 10.9 0.1 43 58171
#

we can see that, as discussed earlier, the copies on hdisk0 are most recent and are being read so that a write operation can update the copies on hdisk2 now that it is available again.
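The resynchronization traffic can be picked out of the iostat figures mechanically: the disk being rebuilt is the one whose writes dwarf its reads. The sketch below applies such a filter to the sample figures above; the 10x write-to-read cutoff is an arbitrary illustrative threshold, not an AIX convention.

```shell
#!/bin/sh
# Identify the disk receiving the bulk of the writes during a
# resynchronization, from saved iostat disk lines (name, %tm_act,
# Kbps, tps, Kb_read, Kb_wrtn).  The 10x cutoff is an arbitrary
# illustrative threshold; on a live system:
#   iostat | awk '/^hdisk/ && $6 > 10 * $5 { print $1 }'
resync_target() {
    cat <<'EOF' | awk '$6 > 10 * $5 { print $1 }'
hdisk0 14.9 55.5 5.3 231182 65737
hdisk1 0.2 1.2 0.0 1070 5191
hdisk2 1.2 10.9 0.1 43 58171
EOF
}
resync_target
```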
Simulation of Disk Failure During Normal Operations

This test uses hdisk2, a 355MB external disk device. Since this is an external disk, we can power it off while the system is being used to verify that normal processing is not halted (we are not concerned about any performance implications in this scenario).

First execute the lsvg -l rootvg command to confirm that all rootvg logical volumes are now in a syncd state.

Power off hdisk2 and repeat the command to obtain output such as:


# lsvg -l rootvg
rootvg:
LV NAME TYPE LPs PPs PVs LV STATE MOUNT POINT
hd6 paging 8 16 2 open/syncd N/A
hd5 boot 1 1 1 closed/syncd N/A
hd8 jfslog 1 2 2 open/stale N/A
hd4 jfs 1 2 2 open/stale /
hd2 jfs 50 100 2 open/stale /usr
hd9var jfs 3 6 2 open/stale /var
hd3 jfs 2 4 2 open/syncd /tmp
hd1 jfs 1 2 2 open/syncd /home
paging00 paging 16 32 2 open/syncd N/A
hd5x boot 2 2 1 closed/syncd N/A
#

Note that not all rootvg logical volumes are now stale.

The other logical volumes are not currently involved in an I/O operation and so they remain in a syncd state.

Once power is restored to hdisk2, it can again be synchronized with hdisk0 by using the varyonvg rootvg command or the corresponding smitty vg selection.

Storage Subsystem Design

For this section, it is very beneficial for the reader to become familiar with the concepts discussed in:

Once the available hardware has been reviewed (please refer to Planning), the next step in storage subsystem design is to plan and design your volume group configuration.

A Volume Group Design Example

Two major aims in storage subsystem design are to achieve the optimum performance for disk access requests (in other words, the fastest disk access possible), and to achieve the highest possible availability (that is, to give disk access requests as good a chance as is practically possible of not failing). These aims are discussed in more detail elsewhere, but it is important to note at this point that they can often interfere with each other: a high availability configuration will often result in slower access times to the data that is now stored in a highly available state.

Sometimes, a particular configuration option will be beneficial for both performance and availability, but then there is likely to be an associated extra cost for that choice. In this example, the price paid for a second SCSI adapter has bought us the option of placing some disk devices on this second adapter. This can improve performance because the I/O requests workload can now be shared between both adapters. This also improves availability if disk mirroring is implemented using disks on different adapters, because we would then still have access to one disk if one of the adapters failed.

However, our particular configuration and storage needs do not allow us to fully utilize this benefit. Recall from Planning that for this example, we have a total of nine disks to allocate. We have already allocated one internal 670MB disk and the external 355MB disk to the rootvg volume group. We are not interested in the performance of disk I/O for the logical volumes in the rootvg, so we've allocated the slowest disks for the rootvg. We did not use both internal 670MB disks for rootvg because we wanted to be able to power off a rootvg disk while the system is being used; please refer to rootvg Mirroring - Implementation and Recovery.

This leaves us with seven more disks to allocate, two on the SCSI adapter in slot eight, and five on the adapter in slot seven. Ideally, a system with multiple disks should consist of multiple volume groups; usually such a system should have at least one non-rootvg volume group.

The guidelines for volume group design are discussed elsewhere, see Storage Subsystem Design, and also refer to the article Developing a Volume Group Strategy, but for this example, we want to create three volume groups, primarily for safe and easy maintenance. This allows us to do different storage management related tasks in different volume groups, and hence we can isolate the effects of these tasks. In other words, a volume group synchronization operation will potentially only result in extensive I/O in two or three disks, instead of, say, seven disks if they are all grouped together as one volume group. Multiple volume groups allow journaled file systems to be created in one volume group, and raw logical volumes can be used by databases in another volume group. Also, we can destroy the configuration of one volume group and its associated components (disks, logical volumes, data) during one example without affecting the integrity of data, file systems, and logical volumes used in other examples in other volume groups. Finally, we have the options of implementing different quorum characteristics in the different volume groups, and we can use a different physical partition size for each volume group.


**** Design Change - Now or later? ****

If you do not fully understand all the implications of a proposed design but need to implement one today, then do so, provided you at least understand that any future disk reallocation may be a large job requiring significant system maintenance time and possibly interrupting end users.

In our example, the seven disks left after the rootvg set up can be allocated to one volume group, or up to seven different volume groups. There are arguments for and against creating three volume groups. However, for expediency and the reasons outlined above, we shall create three volume groups, each with a physical partition size of 4MB, and we're ready to change this in the future if required.


The choice of three volume groups also illustrates that any design has to work with the available resources. We only have two disks available on the SCSI adapter in slot eight and hence the benefits of using a second adapter will only be available to at most two volume groups. Hence we need to prioritize the creation of our volume groups. In this example design, assume that the created logical volumes will require all available volume group physical partitions. Also assume that for some logical volumes, their content is such that performance is more critical than availability (for example, assume they store large archived databases). For other logical volumes, assume that availability is more critical (for example, a small customer database with names and phone numbers). Since this example has three volume groups, one of them will have to sacrifice either performance or availability because it will use only one SCSI adapter. We place priority on the logical volumes that require optimal performance, so, in this example, create the following volume groups using the specified disks for the reasons stated:

Map Files Usage and Contents

A map file is used to specify exactly which physical partitions on which disks will contain the logical partitions for the primary, secondary, or tertiary copy of a logical volume. The physical partitions are allocated to logical partitions for a logical volume copy according to the order in which they appear in the map file. Each logical volume copy should have its own map file, and the map files of each logical volume copy should each allocate the same number of physical partitions. Hence it offers very precise control when a logical volume is first created (the primary copy), or when the secondary or tertiary copies of a logical volume are subsequently created in a mirrored environment.
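
As a concrete illustration (the disk name and partition numbers here are hypothetical, not the map files used later), each map file line has the form PVname:PPnumber, and lines are consumed in logical partition order:

```shell
# Hypothetical map file for the first copy of a 3-LP logical volume.
# Each line allocates one physical partition, in logical partition order;
# a range form such as hdisk5:116-118 is also commonly accepted.
cat > /tmp/copy1.map <<'EOF'
hdisk5:116
hdisk5:117
hdisk5:118
EOF
# The copy would then be created with something like (not run here):
#   mklv -y demolv -m /tmp/copy1.map demovg 3
cat /tmp/copy1.map
```

A second map file of the same length would describe the mirror copy, partition by partition.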

Before map files are created, we need to check the following:

You can easily check this with the following commands:
# lspv -p hdisk5
hdisk5:
PP RANGE STATE REGION LV ID TYPE MOUNT POINT
1-58 free outer edge
59-115 free outer middle
116-172 free center
173-229 free inner middle
230-287 free inner edge
# lspv -p hdisk3
hdisk3:
PP RANGE STATE REGION LV ID TYPE MOUNT POINT
1-15 free outer edge
16-30 free outer middle
31-45 free center
46-60 free inner middle
61-75 free inner edge

An example of the use of map files is given in the AIX Version 4.1 Hypertext Information Base Library article Developing a Logical Volume Strategy. However, the examples that follow here in Storage Subsystem Design use the following map files to create logical volumes in perfvg:

When you use map files to specify exactly which physical partitions on a disk to use, you can ignore the inter-disk allocation policies specified by the smit options:

You can also ignore the intra-disk smit option:

When you use smit, these fields have default values which can be ignored because the map file physical partition allocation will have the higher precedence.

For example, the use of the two map files badmir.map and badmir.map2 to create the two copies of the perflv1 logical volume later will result in both copies being placed on hdisk5. Hence, this gives you the same result as you would obtain if you set RANGE of physical volumes to minimum. This is why these fields must be left at their defaults; if you try to set one of them as well as a map file, you'll get an error like:


0516-690 mklv: The -a, -e, -u, -s, and -c options cannot be
used with the -m option.
Usage: mklv [-a IntraPolicy] [-b BadBlocks] [-c Copies] [-d Schedule]
[-e InterPolicy] [-i] [-L Label] [-m MapFile] [-r Relocate]
[-s Strict] [-t Type] [ -u UpperBound] [-v Verify] [-w MWC]
[-x MaxLPs] [-y LVname] [-Y Prefix] [-S StripeSize] VGname NumberOfLPs
[PVname...]
Makes a logical volume.

A Design Example for Improved Availability

This section will show you how to implement a mirrored environment that will help you minimize the disruption caused by a hardware failure. The example in this section assumes that you accept the cost of the extra disk capacity required to implement mirroring.

If you do not have enough physical volumes to do this, then you can still improve your availability by specifying minimum as the target range of physical volumes during the creation of your logical volumes. This may be helpful if you know two physical volumes in a volume group are much more reliable than another, because if the less reliable physical volume fails, you may be able to access the logical volumes that exist on one of the good disks.

We have already discussed a mirrored rootvg volume group, so this example shows you how a non-rootvg volume group can be mirrored to provide higher availability than in a non-mirrored environment.

Since mirroring requires a minimum of two physical volumes, we will also show how to identify these resources. We will use the name availvg for our volume group, and for our logical volume and journaled file system we will use the names availlv and availjfs respectively.

Command Line Summary

  1. First check to see what disks are available and that they are not assigned to an existing volume group:
    # lspv
    hdisk4 0000020158496d72 none
    hdisk6 000002007bb618f5 none
    # lsdev -Cc disk
    hdisk4 Available 00-07-00-0,0 1.2 GB SCSI Disk Drive (in 2.4 GB Disk Unit)
    hdisk6 Available 00-07-00-2,0 1.2 GB SCSI Disk Drive (in 2.4 GB Disk Unit)

  2. Create the non-rootvg volume group using both physical volumes:
    # mkvg -f -y'availvg' 'hdisk4 hdisk6'

  3. Add a logical volume to the volume group availvg, creating two copies, each on a different physical volume. The logical volume will consist of six logical partitions:
    # mklv -y'availlv' -e'x' -c'2' -v'y' 'availvg' '6'

  4. Create a journaled file system, /availjfs, using the logical volume created above:
    # crfs -v jfs -d'availlv' -m'/availjfs' -A'yes' -p'rw' -t'no' \
    -a frag='4096' -a nbpi='4096' -a compress='no'

  5. Mount the journaled file system:
    # mount /availjfs

  6. Create a copy of the journal log logical volume:
    # mklvcopy -e'x' '-k' 'loglv00' '2'

  7. Turn off quorum checking:
    # chvg -a'y' -Q'n' 'availvg'

You now have a volume group with mirrored logical volumes and a file system mounted and ready to be used.
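
The free-disk check in step 1 can be scripted. The sketch below applies the same filter to sample lspv output (a third column of none means the disk belongs to no volume group); on a live system you would pipe lspv itself:

```shell
# Find disks not yet assigned to a volume group: lspv's third
# column is "none" for unassigned physical volumes.
sample_lspv() {
  printf '%s\n' \
    'hdisk4 0000020158496d72 none' \
    'hdisk5 0002479088f5f347 perfvg' \
    'hdisk6 000002007bb618f5 none'
}
sample_lspv | awk '$3 == "none" { print $1 }'
```

On AIX this becomes: lspv | awk '$3 == "none" { print $1 }'.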

Detailed Description

The above summary steps have shown us how to create a mirrored volume group. In this section we will look at each command separately, showing its output, and verify that we have successfully created a mirrored volume group.

How to Create a Mirrored non-rootvg Volume Group

In order to create a mirrored volume group we need two or more free physical volumes. In our example we have chosen hdisk4 and hdisk6, each capable of being powered on and off separately. This will be useful in simulating a physical volume failure by switching off one of the active physical volumes. A mirrored logical volume, availlv, will be created with a size of six logical partitions (24MB), with each copy on a separate physical volume.

In order to achieve high availability we need to make sure that for each of the physical volumes selected for the volume group:

  1. First let us look at the availability of the physical volumes for the volume group. Execute the lspv command to check which physical volumes are currently not assigned to a volume group:
    # lspv
    hdisk0 00014732b1bd7f57 rootvg
    hdisk1 000137231982c0f2 stripevg
    hdisk2 00012218da42ba76 rootvg
    hdisk3 00000201dc8b0b32 perfvg
    hdisk4 0000020158496d72 none
    hdisk5 0002479088f5f347 perfvg
    hdisk6 000002007bb618f5 none
    hdisk7 00000446431550c9 none
    hdisk8 0001221800072440 stripevg

    Since the physical volumes hdisk4, hdisk6, and hdisk7 are attached to the same SCSI adapter and do not have their own power supplies, we do not have the optimal availability scenario. However, each of these physical volumes has its own power switch, so hdisk4 and hdisk6 will be chosen, since we will be able to simulate a hard disk failure by switching off the power to either one of these two disks.

  2. Create a volume group that contains these two physical volumes by executing the smitty mkvg command. On the following screen enter the name of the volume group and the names of the physical volumes we have identified. After filling out the fields press Enter.
                                   Add a Volume Group

    Type or select values in entry fields.
    Press Enter AFTER making all desired changes.

    [Entry Fields]
    VOLUME GROUP name [availvg]
    Physical partition SIZE in megabytes 4 +
    * PHYSICAL VOLUME names [hdisk4 hdisk6] +
    Activate volume group AUTOMATICALLY yes +
    at system restart?
    * ACTIVATE volume group after it is yes +
    created?
    Volume Group MAJOR NUMBER [] +#

    F1=Help F2=Refresh F3=Cancel F4=List
    F5=Reset F6=Command F7=Edit F8=Image
    F9=Shell F10=Exit Enter=Do

    Press the F10 key after smit returns with OK.

    We have now created the volume group availvg, and are ready to add a logical volume. Note that the volume group is automatically varied on.

  3. To create the availlv logical volume:
    1. Execute the smitty mklv command:
                                    Add a Logical Volume

      Type or select values in entry fields.
      Press Enter AFTER making all desired changes.

      [TOP] [Entry Fields]
      Logical volume NAME [availlv]
      * VOLUME GROUP name availvg
      * Number of LOGICAL PARTITIONS [6] #
      PHYSICAL VOLUME names [] +
      Logical volume TYPE []
      POSITION on physical volume outer_middle +
      RANGE of physical volumes maximum +
      MAXIMUM NUMBER of PHYSICAL VOLUMES [] #
      to use for allocation
      Number of COPIES of each logical 2 +
      partition
      Mirror Write Consistency? yes +
      Allocate each logical partition copy yes +
      on a SEPARATE physical volume?
      [MORE...9]

      F1=Help F2=Refresh F3=Cancel F4=List
      F5=Reset F6=Command F7=Edit F8=Image
      F9=Shell F10=Exit Enter=Do

    2. This is the first screen of this smit menu. On this screen enter information for the following fields as shown above:
      • Logical volume NAME
      • Number of LOGICAL PARTITIONS
      • RANGE of physical volumes
      • Number of COPIES of each logical partition

      For the range and copies fields, use the F4=List function key and select the appropriate value. The RANGE field must be set to maximum so that each logical partition copy is placed on a separate physical volume. The COPIES field must be set to 2 so that two copies of each logical partition are created.

    3. To access information on the second screen use the PageDown key on the keyboard. The second screen of the smit menu looks like:
                                    Add a Logical Volume

      Type or select values in entry fields.
      Press Enter AFTER making all desired changes.

      [MORE...9] [Entry Fields]
      Number of COPIES of each logical 2 +
      partition
      Mirror Write Consistency? yes +
      Allocate each logical partition copy yes +
      on a SEPARATE physical volume?
      RELOCATE the logical volume during reorganization? yes +
      Logical volume LABEL []
      MAXIMUM NUMBER of LOGICAL PARTITIONS [128]
      Enable BAD BLOCK relocation? yes +
      SCHEDULING POLICY for writing logical parallel +
      partition copies
      Enable WRITE VERIFY? yes +
      File containing ALLOCATION MAP []
      Stripe Size? [Not Striped] +
      [BOTTOM]

      F1=Help F2=Refresh F3=Cancel F4=List
      F5=Reset F6=Command F7=Edit F8=Image
      F9=Shell F10=Exit Enter=Do

      On this screen use the F4 key and select yes for the field Enable WRITE VERIFY?. The effect of this is to read the data back after it has been written, to make sure that the write was successful.

    4. Press Enter after making the above changes. When smit returns with OK, press the F10 key to exit smit.
    5. Execute the command lslv -m availlv to get information about the physical partition map for the logical volume availlv:
      # lslv -m availlv
      availlv:N/A
      LP PP1 PV1 PP2 PV2 PP3 PV3
      0001 0082 hdisk4 0085 hdisk6
      0002 0082 hdisk6 0085 hdisk4
      0003 0083 hdisk4 0086 hdisk6
      0004 0083 hdisk6 0086 hdisk4
      0005 0084 hdisk4 0087 hdisk6
      0006 0084 hdisk6 0087 hdisk4


      **** Note copy location **** Each logical partition copy is placed on a different physical volume.
    6. Let us now check to see which region of each physical volume has been used for logical volume availlv. Execute the following commands:
      # lspv -p hdisk4
      hdisk4:
      PP RANGE STATE REGION LV ID TYPE MOUNT POINT
      1-58 free outer edge
      59-81 free outer middle
      82-87 used outer middle availlv jfs N/A
      88-115 free outer middle
      116-172 free center
      173-229 free inner middle
      230-287 free inner edge
      # lspv -p hdisk6
      hdisk6:
      PP RANGE STATE REGION LV ID TYPE MOUNT POINT
      1-58 free outer edge
      59-81 free outer middle
      82-87 used outer middle availlv jfs N/A
      88-115 free outer middle
      116-172 free center
      173-229 free inner middle
      230-287 free inner edge

      The above output shows that on both hdisk4 and hdisk6 the outer-middle region of the disk is used, as expected.
  4. Now type smitty jfs and select the menu option Add a Journaled File System on a Previously Defined Logical Volume. On the smit screen, first press F4, then choose the logical volume availlv from the list and press Enter. Then enter /availjfs for the field MOUNT POINT, and change the default setting for Mount AUTOMATICALLY at system restart? to yes by pressing the F4 key and choosing yes from the list. The screen should look like the following when all the fields have been entered:
          Add a Journaled File System on a Previously Defined Logical Volume


    Type or select values in entry fields.
    Press Enter AFTER making all desired changes.

    [Entry Fields]
    * LOGICAL VOLUME name availlv +
    * MOUNT POINT [/availjfs]
    Mount AUTOMATICALLY at system restart? yes +
    PERMISSIONS read/write +
    Mount OPTIONS [] +
    Start Disk Accounting? no +
    Fragment Size (bytes) 4096 +
    Number of bytes per inode 4096 +
    Compression algorithm no +

    F1=Help F2=Refresh F3=Cancel F4=List
    F5=Reset F6=Command F7=Edit F8=Image
    F9=Shell F10=Exit Enter=Do

    Press Enter when all the fields have been filled out. The file system is created when smit returns with OK as shown below:


                              COMMAND STATUS

    Command: OK stdout: yes stderr: no

    Before command completion, additional instructions may appear below.

    Based on the parameters chosen, the new /availjfs JFS file system
    is limited to a maximum size of 134217728 (512 byte blocks)

    New File System size is 49152













    F1=Help F2=Refresh F3=Cancel F6=Command
    F8=Image F9=Shell F10=Exit /=Find
    n=Find Next

    Press F10 to exit smit.

  5. Since this is the first journaled file system created in the volume group availvg, a log logical volume (journal log) is automatically created. This log logical volume also needs to be mirrored through the following procedure:
    1. To identify the name of the journal log within the volume group availvg, execute the command:
      # lsvg -l availvg
      availvg:
      LV NAME TYPE LPs PPs PVs LV STATE MOUNT POINT
      availlv jfs 6 12 2 closed/syncd /availjfs
      loglv00 jfslog 1 1 1 closed/syncd N/A

      From the above output we can see that the journal log is called loglv00 since it is of type jfslog.

    2. Execute the following command to find out which physical volume is used to hold the journal log:
      # lslv -m loglv00
      loglv00:N/A
      LP PP1 PV1 PP2 PV2 PP3 PV3
      0001 0088 hdisk4

      From the output of the above two commands, also note that only one physical partition has been allocated to loglv00 and it is not mirrored.

    3. Now we need to create a copy of the log logical volume (journal log). Type smitty mklvcopy and enter loglv00 for the LOGICAL VOLUME name field and press Enter. On the next smit screen change the content of:
      • NEW TOTAL number of logical partition copies to 2.
      • RANGE of physical volumes to maximum.
      • SYNCHRONIZE the data in the new logical partition copies? to yes.
      so that the screen looks like:
                     Add Copies to a Logical Volume



      Type or select values in entry fields.
      Press Enter AFTER making all desired changes.

      [Entry Fields]
      * LOGICAL VOLUME name loglv00
      * NEW TOTAL number of logical partition 2 +
      copies
      PHYSICAL VOLUME names [] +
      POSITION on physical volume outer_middle +
      RANGE of physical volumes maximum +
      MAXIMUM NUMBER of PHYSICAL VOLUMES [32] #
      to use for allocation
      Allocate each logical partition copy yes +
      on a SEPARATE physical volume?
      File containing ALLOCATION MAP []
      SYNCHRONIZE the data in the new yes +
      logical partition copies?



      F1=Help F2=Refresh F3=Cancel F4=List
      F5=Reset F6=Command F7=Edit F8=Image
      F9=Shell F10=Exit Enter=Do

      Press Enter after making the changes. When smit returns with OK press F10 to exit smit.

  6. Check that we have successfully mirrored the two logical volumes in the volume group by typing:
    # lsvg -l availvg
    availvg:
    LV NAME TYPE LPs PPs PVs LV STATE MOUNT POINT
    availlv jfs 6 12 2 closed/syncd /availjfs
    loglv00 jfslog 1 2 2 closed/syncd N/A

    The output indicates that the jfslog loglv00 consists of one logical partition with each physical partition copy on a different physical volume. Likewise, for availlv, the 6 logical partitions consist of 12 physical partitions with each copy residing on a different physical volume.

  7. Now mount the file system /availjfs using the command:
    # mount /availjfs

  8. We must now turn off quorum checking so that in the event of losing 51% or more of the physical volumes (VGDAs), the volume group availvg is not varied off automatically. Execute the command smitty chvg and enter availvg for the field VOLUME GROUP name and press Enter. On the second smit screen, as shown below, change the field A QUORUM of disks required to keep the volume group on-line? to no by pressing F4 and selecting no from the list. Then press Enter.
                          Change a Volume Group


    Type or select values in entry fields.
    Press Enter AFTER making all desired changes.

    [Entry Fields]
    * VOLUME GROUP name availvg
    * Activate volume group AUTOMATICALLY yes +
    at system restart?
    * A QUORUM of disks required to keep the volume no +
    group on-line ?



    F1=Help F2=Refresh F3=Cancel F4=List
    F5=Reset F6=Command F7=Edit F8=Image
    F9=Shell F10=Exit Enter=Do

    When smit returns with OK press F10 to exit smit.
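
For reference, the smit procedure above collapses to a few commands matching the earlier command line summary. The sketch below only echoes them (they exist only on AIX); drop the echo wrapper to run the sequence for real:

```shell
# Dry-run of the mirrored volume group setup, in summary order.
run() { echo "$@"; }      # change the body to "$@" to execute for real on AIX
run mkvg -f -y availvg hdisk4 hdisk6            # create the volume group
run mklv -y availlv -e x -c 2 -v y availvg 6    # 6-LP LV, 2 copies, write verify
run crfs -v jfs -d availlv -m /availjfs -A yes  # journaled file system on the LV
run mount /availjfs
run mklvcopy -e x -k loglv00 2                  # mirror the jfslog as well
run chvg -a y -Q n availvg                      # disable quorum checking
```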

Verify a Mirrored Volume Group for Availability

We are now ready to test the mirrored volume group availvg. As explained before, the two physical volumes hdisk4 and hdisk6 are connected to the same SCSI adapter so we will not be able to test for SCSI failures. However, we can simulate a disk failure by powering off one of these physical volumes since each physical volume has its own power switch.

You can use a Korn shell script to simulate a disk failure and recovery. The script automatically generates some logical I/O to the volume group availvg using the dd command and then requests the user to switch off one of the physical volumes. Stale physical partitions are automatically detected, and the user is prompted to power the physical volume back on, after which the stale partitions are resynchronized using the varyonvg command. The dd operation expects the InfoExplorer file, /usr/lpp/info/lib/en_US/aix41/cmds/cmds.rom, to be installed on the system.

You can use the script in the following example test sequence that checks the availability of the availvg volume group.

  1. Save the following script as availvg.ksh.
    # Progress and command output are logged to /var/tmp/availvg.ksh.out
    integer syncrun=0
    integer cnt=1
    while true
    do
      # Restart the background dd file copy whenever it is not running
      PS=`ps -ef | grep -v grep | grep "dd if=/usr/lpp/" | \
          awk '{print $8}'`
      if [ "$PS" != "timex" ]
      then
        echo dd number: $cnt started >> /var/tmp/availvg.ksh.out 2>&1
        timex dd if=/usr/lpp/info/lib/en_US/aix41/cmds/cmds.rom \
            of=/availjfs/cmds.rom.dd bs=100k >> \
            /var/tmp/availvg.ksh.out 2>&1 &
        cnt=cnt+1
        if [ $syncrun -gt 0 ]
        then
          ps -ef >> /var/tmp/availvg.ksh.out
        fi
      fi
      while true
      do
        echo "Checking for stale partitions." | \
            tee -a /var/tmp/availvg.ksh.out
        echo "Please wait..." | tee -a /var/tmp/availvg.ksh.out
        PPS=`lsvg availvg | grep "STALE PPs" | awk '{print $6}'`
        if [ "$PPS" = "0" ]
        then
          echo "Stale partitions not found." | \
              tee -a /var/tmp/availvg.ksh.out
          echo "To recreate stale partitions power off one disk unit" | \
              tee -a /var/tmp/availvg.ksh.out
          echo "and press enter. To quit press CTRL-C." | \
              tee -a /var/tmp/availvg.ksh.out
          syncrun=0
          read a
          break
        else
          echo "$PPS stale partitions currently found." | \
              tee -a /var/tmp/availvg.ksh.out
          # If syncvg is already resynchronizing, just keep polling
          SYNC=`ps -ef | grep -v grep | grep "/usr/sbin/syncvg" | \
              awk '{print $9}'`
          if [ "$SYNC" = "/usr/sbin/syncvg" ]
          then
            break
          else
            echo "Press enter when power, cables etc checked." | \
                tee -a /var/tmp/availvg.ksh.out
            read ans
            echo "Varyonvg started..." >> \
                /var/tmp/availvg.ksh.out 2>&1
            (time /usr/sbin/varyonvg availvg >> \
                /var/tmp/availvg.ksh.out 2>&1) 2>> \
                /var/tmp/availvg.ksh.out &
            syncrun=1
            break
          fi
        fi
      done
    done

  2. Before we execute the script let us look at the status of the logical volumes using the lsvg -l availvg command:
    # lsvg -l availvg
    availvg:
    LV NAME TYPE LPs PPs PVs LV STATE MOUNT POINT
    availlv jfs 6 12 2 open/syncd /availjfs
    loglv00 jfslog 1 2 2 open/syncd N/A

    The output shows that the logical volumes availlv and loglv00 are open and synchronized (open/syncd).

  3. Now execute the script availvg.ksh, as follows:
    # ksh availvg.ksh
    Checking for stale partitions.
    Please wait...
    Stale partitions not found.
    To recreate stale partitions power off one disk unit
    and press enter. To quit press CTRL-C.

  4. At this point power off physical volume hdisk6 and then press Enter.

    Since the file copy operation is started again after switching off the disk, we now have stale partitions. This is shown by the following output:


    Checking for stale partitions.
    Please wait...
    7 stale partitions currently found.
    Press enter when power, cables etc checked.

  5. From another terminal let us look at the state of the logical volumes in availvg before we power on hdisk6.
    # lsvg -l availvg
    availvg:
    LV NAME TYPE LPs PPs PVs LV STATE MOUNT POINT
    availlv jfs 6 12 2 open/stale /availjfs
    loglv00 jfslog 1 2 2 open/stale N/A

    We now note that both availlv and loglv00 are marked stale. To get more detailed information about the particular partitions that have become stale execute the command lslv -p hdisk6 availlv. This command is described in Storage Management Files and Commands Summary.

  6. Now power on hdisk6 and press Enter. The following output is produced:
    Varyonvg started...
    Checking for stale partitions.
    Please wait...
    6 stale partitions currently found.
    Checking for stale partitions.
    Please wait...
    5 stale partitions currently found.
    Checking for stale partitions.
    Please wait...
    4 stale partitions currently found.
    Checking for stale partitions.
    Please wait...
    3 stale partitions currently found.
    Checking for stale partitions.
    Please wait...
    2 stale partitions currently found.
    Checking for stale partitions.
    Please wait...
    1 stale partitions currently found.
    Checking for stale partitions.
    Please wait...
    Stale partitions not found.
    To recreate stale partitions power off one disk unit
    and press enter. To quit press Ctrl-C.

  7. At this point press Ctrl-C to exit the shell script.

    During the availability verification test the file copy command continues to run, even while the varyonvg command is executing, to synchronize stale partitions. This is deliberately done to simulate continuous I/O activity which would occur in a production system.

  8. Since the test is now complete, let us look at the state of the logical volumes using the lsvg -l availvg command.
    # lsvg -l availvg
    availvg:
    LV NAME TYPE LPs PPs PVs LV STATE MOUNT POINT
    availlv jfs 6 12 2 open/syncd /availjfs
    loglv00 jfslog 1 2 2 open/syncd N/A

    As expected, the logical volume partitions have all been synchronized. From the results of our test we can conclude that mirroring of a non-rootvg volume group can be carried out with ease, and provides much higher availability than a non-mirrored volume group. As we saw in our test, I/O to the good physical volume continues during the disk failure.
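
The stale-partition probe inside availvg.ksh is plain field extraction from lsvg output. The sketch below exercises the same grep and awk pipeline against a canned header line (format assumed from the usual lsvg layout), so the parsing can be checked off-host:

```shell
# Off-host check of the stale-PP parsing used by availvg.ksh:
# "STALE PPs" count is the sixth whitespace-separated field.
sample_lsvg() { printf 'STALE PVs:      1     STALE PPs:      7\n'; }
PPS=$(sample_lsvg | grep "STALE PPs" | awk '{print $6}')
echo "$PPS stale partitions currently found."
```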

A Design Example for Improved Performance

First, create perfvg that contains hdisk3 and hdisk5 using the same procedure as for the creation of availvg.

  1. Execute smitty vg
  2. Select Add a Volume Group
  3. Type perfvg for VOLUME GROUP name
  4. Type hdisk3 hdisk5 for PHYSICAL VOLUME names
  5. Press Enter=Do and then F10=Exit when smit returns an OK prompt
  6. Remember to ensure that perfvg is synchronized when you have finished creating all the logical volumes in it, if some of them are mirrored. This procedure is discussed just before How to Document the Volume Group Design.

Command Line Summary

In this section we will be creating two mapped mirrored logical volumes with different performance characteristics, and then two mapped non-mirrored logical volumes, also with differing performance characteristics. We will then create a jfslog logical volume and a paging logical volume, before documenting our design. This summary takes you through the steps you would need to follow at the command line:

  1. Create two mapped mirrored logical volumes:
    • Create a logical volume with poor performance characteristics using the following command:
      # mklv -y'perflv1' -d's' -m'/home/maps/badmir.map' 'perfvg' '10'

      This creates a logical volume of size 10 logical partitions in the perfvg volume group. Scheduling will be done sequentially, and mirror write consistency is on. The physical partitions will be allocated according to the map file badmir.map.

    • Add the mirrored copy:
      # mklvcopy -m'/home/maps/badmir.map2' 'perflv1' '2'

      This creates a copy of the physical partitions using the map file badmir.map2 to allocate partitions.

    • Create a logical volume with good performance characteristics using the following command:
      # mklv -y'perflv2' -w'n' -m'/home/maps/goodmir.map' 'perfvg' '10'

      This creates a logical volume of size 10 logical partitions in the perfvg volume group. Scheduling will be done in parallel, and mirror write consistency is off. The physical partitions will be allocated according to the map file goodmir.map.

    • Add the mirrored copy:
      # mklvcopy -m'/home/maps/goodmir.map2' 'perflv2' '2'

      This creates a copy of the physical partitions using the map file goodmir.map2 to allocate partitions.

  2. Create two mapped non-mirrored logical volumes:
    • Create a logical volume with poor performance characteristics using the following command:
      # mklv -y'perflv4' -m'/home/maps/inedge.map' 'perfvg' '10'

      This creates a logical volume of size 10 logical partitions, the physical partitions being allocated according to the map in inedge.map.

    • Create a logical volume with good performance characteristics using the following command:
      # mklv -y'perflv3' -m'/home/maps/center.map' 'perfvg' '10'

      This creates a logical volume of size 10 logical partitions, the physical partitions being allocated according to the map in center.map.

  3. Create a jfslog logical volume:
    • Create the logical volume using the following command:
      # mklv -y'perflog' -t'jfslog' -a'c' 'perfvg' '1' 'hdisk5'

      This will create a jfslog logical volume of one logical partition (4MB with the default physical partition size), located in the center partitions of the disk hdisk5 in the perfvg volume group.

    • Format the jfslog using the following command:
      # /usr/sbin/logform /dev/perflog
      logform: destroy /dev/perflog (y)?
      #

      This initializes the jfslog logical volume for use.

  4. Create a paging logical volume:
    • Create the logical volume using the following command:
      # mklv -y'perfpg' -t'paging' -a'c' -e'x' -c'2' -w'n' 'perfvg' '5'

      This will create a paging space logical volume of size 5 logical partitions, using physical partitions located in the center of the disk for maximum performance. A mirrored copy will be created, and mirror write consistency will be set to off. The maximum number of disks possible will also be used to maximize performance.

    • Ensure the paging space will be activated at each system reboot using the following command:
      # chps -a'y' 'perfpg'

    • Activate the new paging space using the following command:
      # swapon /dev/'perfpg'

      This causes the system to begin using the new page space.

  5. Synchronize the volume group:

    When the following command exits, check that any commands that it calls, such as syncvg, have also exited.


    varyonvg perfvg

  6. Document the volume group design:

    Create two files:

    1. /home/vginfo/vg.detail to contain a complete detailed partition map from the lsvg command:
      lsvg -M perfvg > /home/vginfo/vg.detail

    2. /home/vginfo/vg.summary to contain a summary partition map from the lspv command:
      • Save logical volume description for hdisk3:
        lspv -l hdisk3 > /home/vginfo/vg.summary

      • Save physical partitions description for hdisk3:
        lspv -p hdisk3 >> /home/vginfo/vg.summary

      • Save logical volume description for hdisk5:
        lspv -l hdisk5 >> /home/vginfo/vg.summary

      • Save physical partitions description for hdisk5:
        lspv -p hdisk5 >> /home/vginfo/vg.summary

    3. Refer to An Example Description of a Volume Group Design for the output files we obtained.

The performance characteristics of this volume group will be investigated in the detailed guidance section that follows.
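The synchronization and documentation steps above can be combined into one small script. This is a sketch only; the disk names and output paths are those used in this example:

```shell
#!/bin/sh
# Re-varyon the volume group; on AIX this calls syncvg to
# synchronize any stale partition copies in perfvg.
varyonvg perfvg

# Capture the detailed partition map and a per-disk summary.
mkdir -p /home/vginfo
lsvg -M perfvg > /home/vginfo/vg.detail
for d in hdisk3 hdisk5
do
    lspv -l $d        # logical volume description for this disk
    lspv -p $d        # physical partition description for this disk
done > /home/vginfo/vg.summary
```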

Detailed Guidance

This section will now look at these processes in detail:

How to Create Two Mirrored Logical Volumes

This section shows how to create two logical volumes with different settings for the attributes that significantly affect performance in a mirrored environment. The attributes, described by their smit field names, are:

The two logical volumes are:


**** Warning - Choose attributes carefully ****

It is very important to note that when the above attributes are set to give optimal performance, the availability of the good mirror perflv2 suffers. Hence, this choice between performance and availability is a good example of the design decisions that you will have to make.


Let's create two mirrored logical volumes; one whose attributes should give good performance, and one whose attributes should give bad performance. Start by creating only the primary copy so that allocation maps can be used.


**** If you want to avoid map files ****

Please refer to How to Create a Paging Type Logical Volume for an example of how to create a mirrored logical volume (two copies) with optimal performance attributes, that does not use a physical partition allocation map file.


The good logical volume, perflv2, will use goodmir.map, and the bad logical volume, perflv1, will use badmir.map. These map files were displayed earlier in Storage Subsystem Design.

For the bad mirror:

  1. Type smitty lv.
  2. Select Add a Logical Volume.
  3. Type perfvg and press Enter=Do, or select perfvg using F4=List.
  4. Type the logical volume name, such as perflv1.
  5. Type the number of logical partitions to allocate for this logical volume; in this case type 10.
  6. Leave the Number of COPIES of each logical partition set to the default of 1 since we'll add the second copy later.
  7. Leave the Mirror Write Consistency? set as yes. This will only have meaning once we create copies. It will then result in an extra disk I/O operation to the edge of the disk where the Mirror Write Consistency data is stored. This extra I/O will thus decrease performance.

    The smit screen at this stage looks like:


     
    Add a Logical Volume

    Type or select values in entry fields.
    Press Enter AFTER making all desired changes.

    [TOP] [Entry Fields]
    Logical volume NAME [perflv1]
    * VOLUME GROUP name perfvg
    * Number of LOGICAL PARTITIONS [10] #
    PHYSICAL VOLUME names [] +
    Logical volume TYPE []
    POSITION on physical volume outer_middle +
    RANGE of physical volumes minimum +
    MAXIMUM NUMBER of PHYSICAL VOLUMES [] #
    to use for allocation
    Number of COPIES of each logical 1 +
    partition
    Mirror Write Consistency? yes +
    Allocate each logical partition copy yes +
    on a SEPARATE physical volume?
    [MORE...9]
    F1=Help F2=Refresh F3=Cancel F4=List
    F5=Reset F6=Command F7=Edit F8=Image
    F9=Shell F10=Exit Enter=Do

  8. Press the Page Down or Down Arrow key to get to the bottom of the next page.
  9. Type in the path name of the allocation map file /home/maps/badmir.map. This file specifies 10 contiguous inner middle physical partitions to be used.
  10. Change the SCHEDULING POLICY for writing logical partition copies from the default of parallel to sequential by using the Tab key to toggle the value. This will ensure that all updates to mirror copies will occur in sequence, which will obviously be slower than parallel writes.
  11. Use the Tab key to toggle Enable WRITE VERIFY? from its default of no to yes (again, this is the slower option), so that the screen looks like:
                                  Add a Logical Volume

    Type or select values in entry fields.
    Press Enter AFTER making all desired changes.

    [MORE...9] [Entry Fields]
    Number of COPIES of each logical 1 +
    partition
    Mirror Write Consistency? yes +
    Allocate each logical partition copy yes +
    on a SEPARATE physical volume?
    RELOCATE the logical volume during reorganization? yes +
    Logical volume LABEL []
    MAXIMUM NUMBER of LOGICAL PARTITIONS [128]
    Enable BAD BLOCK relocation? yes +
    SCHEDULING POLICY for writing logical sequential +
    partition copies
    Enable WRITE VERIFY? yes +
    File containing ALLOCATION MAP [/home/maps/badmir.map]
    Stripe Size? [Not Striped] +

    [BOTTOM]

    F1=Help F2=Refresh F3=Cancel F4=List
    F5=Reset F6=Command F7=Edit F8=Image
    F9=Shell F10=Exit Enter=Do

  12. Press the Enter=Do key to create the logical volume.
  13. When smit returns OK, press F3=Cancel to return to the Logical Volumes menu.
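Before continuing, you may wish to confirm the attributes just set. The lslv command reports them by field name; this is a quick check rather than a required step:

```shell
# Display the attributes of the bad-performance mirror (AIX).
# Look for SCHED POLICY: sequential, WRITE VERIFY: on, and
# MIRROR WRITE CONSISTENCY: on in the output.
lslv perflv1
```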

Now create the primary copy of the mirrored logical volume with good mirroring performance attributes.

  1. Follow the same process as for the bad performance mirror example. Start by again selecting Add a Logical Volume:
                                  Add a Logical Volume

    Type or select values in entry fields.
    Press Enter AFTER making all desired changes.

    [TOP] [Entry Fields]
    Logical volume NAME [perflv2]
    * VOLUME GROUP name perfvg
    * Number of LOGICAL PARTITIONS [10] #
    PHYSICAL VOLUME names [] +
    Logical volume TYPE []
    POSITION on physical volume outer_middle +
    RANGE of physical volumes minimum +
    MAXIMUM NUMBER of PHYSICAL VOLUMES [] #
    to use for allocation
    Number of COPIES of each logical 1 +
    partition
    Mirror Write Consistency? no +
    Allocate each logical partition copy yes +
    on a SEPARATE physical volume?
    [MORE...9]

    F1=Help F2=Refresh F3=Cancel F4=List
    F5=Reset F6=Command F7=Edit F8=Image
    F9=Shell F10=Exit Enter=Do

  2. This time type perflv2 as the name of the logical volume.
  3. Use the Tab key to toggle Mirror Write Consistency? to no. This will reduce disk movement for each I/O request and so should result in better performance.
  4. Leave the other fields with their defaults and press the Page Down key.
  5. Leave the Number of COPIES of each logical partition set to the default of 1 since we'll add the second copy later.
  6. The only field that now requires alteration is the File containing ALLOCATION MAP field, where you should type /home/maps/goodmir.map.
  7. Press the Enter=Do key on the screen that now looks like:
                                  Add a Logical Volume

    Type or select values in entry fields.
    Press Enter AFTER making all desired changes.

    [MORE...9] [Entry Fields]
    Number of COPIES of each logical 1 +
    partition
    Mirror Write Consistency? no +
    Allocate each logical partition copy yes +
    on a SEPARATE physical volume?
    RELOCATE the logical volume during reorganization? yes +
    Logical volume LABEL []
    MAXIMUM NUMBER of LOGICAL PARTITIONS [128]
    Enable BAD BLOCK relocation? yes +
    SCHEDULING POLICY for writing logical parallel +
    partition copies
    Enable WRITE VERIFY? no +
    File containing ALLOCATION MAP [/home/maps/goodmir.map]
    Stripe Size? [Not Striped] +
    [BOTTOM]

    F1=Help F2=Refresh F3=Cancel F4=List
    F5=Reset F6=Command F7=Edit F8=Image
    F9=Shell F10=Exit Enter=Do

  8. When smit returns OK, press F3=Cancel to return to the Logical Volumes menu.

We now have two single copy logical volumes whose attributes, when mirroring is implemented, will either give bad or optimal performance. Map files will also be used to create the copies.

For the good mirror:

  1. Select Set Characteristic of a Logical Volume from the following screen:
                                    Logical Volumes

    Move cursor to desired item and press Enter.

    List All Logical Volumes by Volume Group
    Add a Logical Volume
    Set Characteristic of a Logical Volume
    Show Characteristics of a Logical Volume
    Remove a Logical Volume
    Copy a Logical Volume














    F1=Help F2=Refresh F3=Cancel F8=Image
    F9=Shell F10=Exit Enter=Do

  2. Select Add a Copy to a Logical Volume.
  3. Type perflv2 and press Enter=Do.
  4. Use the Tab key to change NEW TOTAL number of logical partition copies to 2.
  5. Leave the field SYNCHRONIZE the data in the new logical partition copies? with its default of no since we'll synchronize it later, just before section How to Document the Volume Group Design.
  6. Type /home/maps/goodmir.map2 in the File containing ALLOCATION MAP field. This map uses physical partitions on the second disk so that parallel disk I/O should give better performance. The screen should look like:
                             Add Copies to a Logical Volume

    Type or select values in entry fields.
    Press Enter AFTER making all desired changes.

    [Entry Fields]
    * LOGICAL VOLUME name perflv2
    * NEW TOTAL number of logical partition 2 +
    copies
    PHYSICAL VOLUME names [] +
    POSITION on physical volume outer_middle +
    RANGE of physical volumes minimum +
    MAXIMUM NUMBER of PHYSICAL VOLUMES [32] #
    to use for allocation
    Allocate each logical partition copy yes +
    on a SEPARATE physical volume?
    File containing ALLOCATION MAP [/home/maps/goodmir.map2]
    SYNCHRONIZE the data in the new no +
    logical partition copies?



    F1=Help F2=Refresh F3=Cancel F4=List
    F5=Reset F6=Command F7=Edit F8=Image
    F9=Shell F10=Exit Enter=Do

  8. Press the Enter=Do key to create the logical volume copy.
  9. When smit returns OK, press F3=Cancel to return to the Set Characteristic of a Logical Volume menu.

Now create the bad mirror copy:

  1. Select Add a Copy to a Logical Volume.
  2. Type perflv1 and press Enter=Do.
  3. Use the Tab key to change NEW TOTAL number of logical partition copies to 2.
  4. Leave the field SYNCHRONIZE the data in the new logical partition copies? with its default of no since we'll synchronize it later, just before section How to Document the Volume Group Design.
  5. Type /home/maps/badmir.map2 in the File containing ALLOCATION MAP field. This map uses physical partitions on the same disk as the primary copy so that, along with the sequential disk I/O, we should get the worst performance.
  6. Press the Enter=Do key to create the logical volume copy.
  7. When smit returns OK, press F10=Exit to return to the command prompt.
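To check that each logical partition of the mirrored volumes now has two physical copies, and where those copies landed, you can list the logical-to-physical partition map (a quick sketch):

```shell
# Show the logical-to-physical partition mapping (AIX).
# With two copies, each logical partition should list a physical
# partition from the primary copy and a second one from badmir.map2.
lslv -m perflv1
```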
How to Create Two Mapped Non-mirrored Logical Volumes

This section shows how to create two logical volumes which have different logical partition locations on hdisk3 and hdisk5. This enables us to investigate:

The two logical volumes are:


**** Warning - Choose carefully ****

It is very important to note that when the above attributes are set to give optimal performance, the availability of a logical volume, even when it is not mirrored, and thus exists as only a single copy, may suffer. For example, if your map file uses all disks in a volume group, or if the Inter-Physical Volume Allocation Policy is set to maximum, then although the extra disk heads may reduce data access time, access to a logical volume may become difficult or impossible if any disk fails.

We could also have degraded performance but improved the reliability of disk write operations by changing Enable WRITE VERIFY? from its default value of no to yes. This attribute is not investigated in this example.


Let's create both perflv3 and perflv4 using the map files that you can create using your favorite editor, such as the vi text editor.


**** If you want to avoid map files ****

Please refer to How to Create a Journal Log Type Logical Volume, for an example of how to create a non-mirrored logical volume (one copy) with optimal performance attributes, that does not use a physical partition allocation map file.


Since we're using map files, create these logical volumes with the smit defaults; once you've specified a map file, you only need to specify the Number of LOGICAL PARTITIONS and the Logical volume NAME. (Note that Mirror Write Consistency does not apply when only a single copy of a logical volume exists; that is, there is no mirroring, so we can ignore this field.)
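A map file is a plain text file with one line per physical partition, in PVname:PPnum form; a hyphenated range allocates consecutive partitions. The partition numbers below are illustrative only; for an inner-edge map such as inedge.map you would choose high partition numbers appropriate to your disks:

```
hdisk3:110-114
hdisk5:110-114
```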

To create perflv4:

  1. Type smitty mklv to get to the menu whose title is Add a Logical Volume.
  2. Type perfvg in the field VOLUME GROUP name and press the Enter=Do key, or use F4=List to select it.
  3. Type 10 in the field Number of LOGICAL PARTITIONS.
  4. Type perflv4 in the field Logical volume NAME so that the screen looks like:
                                  Add a Logical Volume

    Type or select values in entry fields.
    Press Enter AFTER making all desired changes.

    [TOP] [Entry Fields]
    Logical volume NAME [perflv4]
    * VOLUME GROUP name perfvg
    * Number of LOGICAL PARTITIONS [10] #
    PHYSICAL VOLUME names [] +
    Logical volume TYPE []
    POSITION on physical volume outer_middle +
    RANGE of physical volumes minimum +
    MAXIMUM NUMBER of PHYSICAL VOLUMES [] #
    to use for allocation
    Number of COPIES of each logical 1 +
    partition
    Mirror Write Consistency? yes +
    Allocate each logical partition copy yes +
    on a SEPARATE physical volume?
    [MORE...9]

    F1=Help F2=Refresh F3=Cancel F4=List
    F5=Reset F6=Command F7=Edit F8=Image
    F9=Shell F10=Exit Enter=Do

  5. Press the Page Down or Down Arrow Key to get to the bottom of the next page.
  6. Type the map file path name, such as /home/maps/inedge.map, in the field File containing ALLOCATION MAP so that the screen looks like:
                                  Add a Logical Volume

    Type or select values in entry fields.
    Press Enter AFTER making all desired changes.

    [MORE...9] [Entry Fields]
    Number of COPIES of each logical 1 +
    partition
    Mirror Write Consistency? yes +
    Allocate each logical partition copy yes +
    on a SEPARATE physical volume?
    RELOCATE the logical volume during reorganization? yes +
    Logical volume LABEL []
    MAXIMUM NUMBER of LOGICAL PARTITIONS [128]
    Enable BAD BLOCK relocation? yes +
    SCHEDULING POLICY for writing logical parallel +
    partition copies
    Enable WRITE VERIFY? no +
    File containing ALLOCATION MAP [/home/maps/inedge.map]
    Stripe Size? [Not Striped] +
    [BOTTOM]

    F1=Help F2=Refresh F3=Cancel F4=List
    F5=Reset F6=Command F7=Edit F8=Image

  7. Press the Enter=Do key to create the logical volume.
  8. When smit returns OK, press F3=Cancel to return to the command prompt.

To create perflv3:

  1. Type smitty mklv to get to the menu whose title is Add a Logical Volume.
  2. Type perfvg in the field VOLUME GROUP name and press the Enter=Do key, or use F4=List to select it.
  3. Type 12 in the field Number of LOGICAL PARTITIONS; 12 physical partitions allow us to place three partition pairs on each disk.
  4. Type perflv3 in the field Logical volume NAME so that the screen looks like:
                                  Add a Logical Volume

    Type or select values in entry fields.
    Press Enter AFTER making all desired changes.

    [TOP] [Entry Fields]
    Logical volume NAME [perflv3]
    * VOLUME GROUP name perfvg
    * Number of LOGICAL PARTITIONS [12] #
    PHYSICAL VOLUME names [] +
    Logical volume TYPE []
    POSITION on physical volume outer_middle +
    RANGE of physical volumes minimum +
    MAXIMUM NUMBER of PHYSICAL VOLUMES [] #
    to use for allocation
    Number of COPIES of each logical 1 +
    partition
    Mirror Write Consistency? yes +
    Allocate each logical partition copy yes +
    on a SEPARATE physical volume?
    [MORE...9]

    F1=Help F2=Refresh F3=Cancel F4=List
    F5=Reset F6=Command F7=Edit F8=Image
    F9=Shell F10=Exit Enter=Do

  5. Press the Page Down or Down Arrow key to get to the bottom of the next page.
  6. Type the map file path name, such as /home/maps/center.map, in the field File containing ALLOCATION MAP, so that the screen looks like:
                                  Add a Logical Volume

    Type or select values in entry fields.
    Press Enter AFTER making all desired changes.

    [MORE...9] [Entry Fields]
    Number of COPIES of each logical 1 +
    partition
    Mirror Write Consistency? yes +
    Allocate each logical partition copy yes +
    on a SEPARATE physical volume?
    RELOCATE the logical volume during reorganization? yes +
    Logical volume LABEL []
    MAXIMUM NUMBER of LOGICAL PARTITIONS [128]
    Enable BAD BLOCK relocation? yes +
    SCHEDULING POLICY for writing logical parallel +
    partition copies
    Enable WRITE VERIFY? no +
    File containing ALLOCATION MAP [/home/maps/center.map]
    Stripe Size? [Not Striped] +
    [BOTTOM]

    F1=Help F2=Refresh F3=Cancel F4=List
    F5=Reset F6=Command F7=Edit F8=Image

  7. Press the Enter=Do key to create the logical volume.
  8. When smit returns OK, press F3=Cancel to return to the command prompt.
How to Create a Journal Log Type Logical Volume

This section shows how to create a jfslog logical volume that can be used by one or more AIX Version 4 journaled file systems. You may want to do this to improve your system's performance, since the log can be placed on the center region of the fastest disk in your volume group.

Create the journaled file system log device before any journaled file system is created in the volume group. Otherwise a default device, such as loglv01, will be created automatically. In this example, we'll create perflog before we create any journaled file systems in the perfvg volume group.

For more information, refer to the AIX Version 4.1 Hypertext Information Base Library article Create a File System Log on a Dedicated Disk for a User-Defined volume group.

To create perflog:

  1. Type smitty mklv to get to the menu whose title is Add a Logical Volume.
  2. Type perfvg in the field VOLUME GROUP name and press the Enter=Do key, or use F4=List to select it.
  3. Type perflog in the field Logical volume NAME.
  4. Type 1 in the field Number of LOGICAL PARTITIONS.
  5. Type hdisk5 in the field PHYSICAL VOLUME names, or use F4=List to select it.
  6. Type jfslog in the field Logical volume TYPE. Note that there is no select option available here.
  7. Use the Tab key to toggle POSITION on physical volume to the center setting so that the screen looks like:
                                  Add a Logical Volume

    Type or select values in entry fields.
    Press Enter AFTER making all desired changes.

    [TOP] [Entry Fields]
    Logical volume NAME [perflog]
    * VOLUME GROUP name perfvg
    * Number of LOGICAL PARTITIONS [1] #
    PHYSICAL VOLUME names [hdisk5] +
    Logical volume TYPE [jfslog]
    POSITION on physical volume center +
    RANGE of physical volumes minimum +
    MAXIMUM NUMBER of PHYSICAL VOLUMES [] #
    to use for allocation
    Number of COPIES of each logical 1 +
    partition
    Mirror Write Consistency? yes +
    Allocate each logical partition copy yes +
    on a SEPARATE physical volume?
    [MORE...9]

    F1=Help F2=Refresh F3=Cancel F4=List
    F5=Reset F6=Command F7=Edit F8=Image
    F9=Shell F10=Exit Enter=Do

  8. We can execute this command with the rest of the fields left with their default values, since most fields do not affect a logical volume that consists of one physical partition. This logical volume can only exist as one copy on one disk. Hence, instead of pressing the Page Down key to go to the next screen, press the Enter=Do key to create the logical volume.
  9. When smit returns OK, press F3=Cancel to return to the command prompt.
  10. We now need to format the newly created journaled file system log device perflog with the following command:
    # /usr/sbin/logform /dev/perflog
    logform: destroy /dev/perflog (y)?
    #

    The following example illustrates that this command should not damage the data in a clean (in other words, fsck has been used), unmounted journaled file system. It just initializes the journaled file system log device, so that it can record the changes to the pointers that reference the data stored in a journaled file system.


    **** Warning - Use logform carefully ****

    For more information, refer to the AIX Version 4.1 Hypertext Information Base Library article Create a File System Log on a Dedicated Disk for a User-Defined volume group, and also refer to the logform command in the AIX Version 4.1 Commands Reference.



    # lsvg -l vgname
    LV NAME TYPE LPs PPs PVs LV STATE MOUNT POINT
    perflog jfslog 1 1 1 closed/syncd N/A
    lv04 jfs 1 1 1 closed/syncd /ritest2
    # /usr/sbin/logform /dev/perflog
    logform: destroy /dev/perflog (y)?
    # mount /ritest2
    # cp /etc/motd /ritest2
    # ls -la /ritest2
    total 24
    drwxr-sr-x 2 sys sys 512 Jul 20 16:51 .
    drwxr-xr-x 29 bin bin 1024 Jul 20 15:29 ..
    -r-xr--r-- 1 root sys 880 Jul 20 16:51 motd
    # umount /ritest2
    # /usr/sbin/logform /dev/perflog
    logform: destroy /dev/perflog (y)?
    # mount /ritest2
    # ls -la /ritest2
    total 24
    drwxr-sr-x 2 sys sys 512 Jul 20 16:51 .
    drwxr-xr-x 29 bin bin 1024 Jul 20 15:29 ..
    -r-xr--r-- 1 root sys 880 Jul 20 16:51 motd
    # lsvg -l vgname
    LV NAME TYPE LPs PPs PVs LV STATE MOUNT POINT
    perflog jfslog 1 1 1 open/syncd N/A
    lv04 jfs 1 1 1 open/syncd /ritest2

    The logical volume perflog is now ready to be used.
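As a quick check that the new log is picked up, you can now create a journaled file system in perfvg; because perflog already exists in the volume group, crfs should use it rather than creating a default log device. The mount point and size below are illustrative only:

```shell
# Create a journaled file system in perfvg (AIX).
# Since perflog is the existing jfslog in this volume group,
# it should be used as the log device for the new file system.
crfs -v jfs -g perfvg -m /perfjfs -a size=8192
mount /perfjfs

# perflog should now show as open/syncd.
lsvg -l perfvg
```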

How to Create a Paging Type Logical Volume

This section shows how to create a mirrored paging device in a non-rootvg volume group with attributes that give optimal performance. You may wish to do this for memory intensive applications that will potentially result in a lot of I/O to the paging logical volumes.

Execute smitty pgsp so your screen looks like:


                                  Paging Space

Move cursor to desired item and press Enter.

List All Paging Spaces
Add Another Paging Space
Change / Show Characteristics of a Paging Space
Remove a Paging Space
Activate a Paging Space











F1=Help F2=Refresh F3=Cancel F8=Image
F9=Shell F10=Exit Enter=Do

and then select Add Another Paging Space to display:


                            Add Another Paging Space

Type or select values in entry fields.
Press Enter AFTER making all desired changes.

[Entry Fields]
Volume group name perfvg
SIZE of paging space (in logical partitions) [5] #
PHYSICAL VOLUME name +
Start using this paging space NOW? yes +
Use this paging space each time the system is yes +
RESTARTED?










F1=Help F2=Refresh F3=Cancel F4=List
F5=Reset F6=Command F7=Edit F8=Image
F9=Shell F10=Exit Enter=Do

You can see that these menu choices provide no control over the placement of the paging space logical partitions, nor do they allow us to create multiple copies of the paging logical volume.

We want to use all the disks in the volume group to provide more heads to respond to access requests, so we'll use a maximum range. We'll leave the scheduling policy as parallel so that all disks can handle I/O requests for this logical volume simultaneously. We'll also specify the center disk region and turn off Mirror Write Consistency? to minimize disk activity, since now the disk heads will have a better chance of being able to stay near the center of the disk platters during an I/O request. Hence, use the familiar logical volume creation method as follows:

  1. Type smitty mklv to get to the menu whose title is Add a Logical Volume.
  2. Type perfvg in the field VOLUME GROUP name and press the Enter=Do key, or use F4=List to select it.
  3. Type perfpg in the field Logical volume NAME.
  4. Type 5 in the field Number of LOGICAL PARTITIONS.
  5. Type paging in the field Logical volume TYPE. Note that there is no select option available here.
  6. Use the Tab key to toggle POSITION on physical volume to the center setting.
  7. Use the Tab key to toggle RANGE of physical volumes to the maximum setting.
  8. Use the Tab key to toggle Number of COPIES of each logical partition to a value of 2. This will result in the creation of both a primary and secondary copy of the perfpg logical volume.
  9. Use the Tab key to toggle Mirror Write Consistency? to the no setting so that your screen looks like:
                                  Add a Logical Volume

    Type or select values in entry fields.
    Press Enter AFTER making all desired changes.

    [TOP] [Entry Fields]
    Logical volume NAME [perfpg]
    * VOLUME GROUP name perfvg
    * Number of LOGICAL PARTITIONS [5] #
    PHYSICAL VOLUME names [] +
    Logical volume TYPE [paging]
    POSITION on physical volume center +
    RANGE of physical volumes maximum +
    MAXIMUM NUMBER of PHYSICAL VOLUMES [] #
    to use for allocation
    Number of COPIES of each logical 2 +
    partition
    Mirror Write Consistency? no +
    Allocate each logical partition copy yes +
    on a SEPARATE physical volume?
    [MORE...9]

    F1=Help F2=Refresh F3=Cancel F4=List
    F5=Reset F6=Command F7=Edit F8=Image
    F9=Shell F10=Exit Enter=Do

  10. We can execute this command with the rest of the fields left with their default values. Hence, instead of pressing Page Down to go to the next screen, press the Enter=Do key to create the logical volume.
  11. When smit returns OK, press F3=Cancel to return to the command prompt.
  12. Now execute smitty pgsp, but this time select Change / Show Characteristics of a Paging Space.
  13. Move your cursor to highlight perfpg and then press the Enter=Do key.
  14. Use the Tab key to toggle Use this paging space each time the system is RESTARTED? from no to yes so that your screen looks like:
                    Change / Show Characteristics of a Paging Space

    Type or select values in entry fields.
    Press Enter AFTER making all desired changes.
    [Entry Fields]
    Paging space name perfpg
    Volume group name perfvg
    Physical volume name hdisk5
    NUMBER of additional logical partitions [] #
    Use this paging space each time the system is yes +
    RESTARTED?












    F1=Help F2=Refresh F3=Cancel F4=List
    F5=Reset F6=Command F7=Edit F8=Image
    F9=Shell F10=Exit Enter=Do

  15. Press the Enter=Do key to change the paging space.
  16. When smit returns OK, press F3=Cancel to return to the Paging Space menu.
  17. To immediately start using the new paging device, you can now activate it with the swapon command, as shown earlier in the Command Line Summary.
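The restart behavior was already set in the previous step, so only immediate activation remains. A short sketch, using the device name from this example:

```shell
# Start using the new paging space immediately (AIX).
swapon /dev/perfpg

# Verify that perfpg is now active and shows some usage.
lsps -a
```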
Synchronize the Volume Group

The final step that we need to do is to synchronize perfvg. The parts of this step, similar to those described in How to Synchronize rootvg, are:

  1. Execute the command smitty vg.
  2. Select Activate a Volume Group.
  3. Type perfvg or use F4=List to select it.
  4. Press the Enter=Do key.
  5. Press F10=Exit when smit returns an OK prompt.

If we have to create copies of many small logical volumes, it is more efficient for the systems administrator to use one command after hours to synchronize them. This means that the configuration work can be done during normal business hours without any significant I/O burdens to normal operations.
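One way to defer that synchronization is to queue it with the at command, sketched here; the time specification is only an example, and syncvg -v synchronizes every stale partition in the named volume group:

```shell
# Queue a full synchronization of perfvg for later tonight (AIX).
echo "/usr/sbin/syncvg -v perfvg" | at now + 8 hours

# Review the queued job.
at -l
```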

How to Document the Volume Group Design

Now that we've created perfvg, we can choose some of the commands discussed in Storage Management Files and Commands Summary to enable us to:

If you refer to the Storage Management Files and Commands Summary chapter, you can see that the commands:

are quite simple since they only have a few flags. Hence we prefer to execute these commands directly from the command line rather than through the smit interface. Therefore, although the following is a brief summary of how to use the correct smit options, you may prefer to follow the simple method outlined in the previous command summary to enable you to document your volume group configuration.

You should also note that the most comprehensive volume group command, lsvg -M vgname, does not have a smit interface and hence must be executed from the command line. It can also produce a long output for a large volume group with many physical volumes in it. A smaller summarized version of its output can be obtained by using the lspv command for each disk in the volume group, which you can do using the method below. Note that you could also use lslv, but in this case, we shall use lspv because we only have two disks compared to six logical volumes, so we only have to execute lspv twice to get a complete description of perfvg.

Execute lspv from smit using the following procedure (we will also show you how you can execute lsvg and lslv from smit):

  1. To get to the menu with the title Logical Volume Manager, execute the command smitty lvm, or, if you do not like to use a fastpath:
    1. Execute the command smitty.
    2. Select System Storage Management (Physical & Logical Storage).
    3. Select Logical Volume Manager.
  2. If you want to use the lsvg command:
    1. Select Volume Groups.
    2. Select List Contents of a Volume Group.
    3. Type perfvg and press the Enter=Do key, or use F4=List to select it.
    4. For the field List OPTION, press the F4=List key to display a screen like:
                              List Contents of a Volume Group

      Type or select values in entry fields.
      Press Enter AFTER making all desired changes.

      [Entry Fields]
      * VOLUME GROUP name [perfvg]
      List OPTION status +





      __________________________________________________________________________
      | List OPTION |
      | |
      | Move cursor to desired item and press Enter. |
      | |
      | status |
      | logical volumes |
      | physical volumes |
      | |
      | F1=Help F2=Refresh F3=Cancel |
      F1| F8=Image F10=Exit Enter=Do |
      F5| /=Find n=Find Next |
      F9|_________________________________________________________________________|

      • If you move the cursor to highlight status and press Enter=Do twice, the F6=Command shows you that the displayed output is for the command lsvg perfvg.
      • If you move the cursor to highlight logical volumes and press Enter=Do twice, the F6=Command shows you that the displayed output is for the command lsvg -l perfvg.
      • If you move the cursor to highlight physical volumes and press Enter=Do twice, the F6=Command shows you that the displayed output is for the command lsvg -p perfvg.
    5. Press F10=Exit to return to the command prompt when you've finished reading the output.
    6. For an example of the output of lsvg, please refer to Storage Management Files and Commands Summary.
  3. If you want to use the lslv command:
    1. Select Logical Volumes.
    2. Select Show Characteristics of a Logical Volume.
    3. Type perlv1 and press the Enter=Do key, or use F4=List to select it.
    4. For the field List OPTION, press the F4=List key to display a screen like:
                          Show Characteristics of a Logical Volume

      Type or select values in entry fields.
      Press Enter AFTER making all desired changes.

      [Entry Fields]
      * LOGICAL VOLUME name [perflv1] +
      List OPTION status +





      __________________________________________________________________________
      | List OPTION |
      | |
      | Move cursor to desired item and press Enter. |
      | |
      | status |
      | physical volume map |
      | logical partition map |
      | |
      | F1=Help F2=Refresh F3=Cancel |
      F1| F8=Image F10=Exit Enter=Do |
      F5| /=Find n=Find Next |
      F9|_________________________________________________________________________|

      • If you move the cursor to highlight status and press Enter=Do twice, the F6=Command shows you that the displayed output is for the command lslv perflv1.
      • If you move the cursor to highlight physical volume map and press Enter=Do twice, the F6=Command shows you that the displayed output is for the command lslv -l perflv1.
      • If you move the cursor to highlight logical partition map and press Enter=Do twice, the F6=Command shows you that the displayed output is for the command lslv -m perflv1.
    5. For an example of the output of lslv, please refer to Storage Management Files and Commands Summary.
    6. Press F10=Exit to return to the command prompt when you've finished reading the output.
  4. For this volume group design, we can execute the following two smit commands to get the sample output that follows:
    1. Select Physical Volumes.
    2. Select List Contents of a Physical Volume.
    3. For hdisk3:
      1. Type hdisk3 and press the Enter=Do key, or use F4=List to select it.
      2. For the field List OPTION, press the F4=List key to display a screen like:
                               List Contents of a Physical Volume

        Type or select values in entry fields.
        Press Enter AFTER making all desired changes.

        [Entry Fields]
        PHYSICAL VOLUME name [hdisk3] +
        List OPTION status +





        __________________________________________________________________________
        | List OPTION |
        | |
        | Move cursor to desired item and press Enter. |
        | |
        | status |
        | logical volumes |
        | physical partitions |
        | |
        | F1=Help F2=Refresh F3=Cancel |
        F1| F8=Image F10=Exit Enter=Do |
        F5| /=Find n=Find Next |
        F9|_________________________________________________________________________|

      3. Move the cursor to highlight logical volumes and press Enter=Do twice. The F6=Command shows you that the displayed output is for the command lspv -l hdisk3.
      4. Press F3=Cancel to return to the screen with the title List Contents of a Physical Volume when you've finished reading the output.
      5. For the field List OPTION, press the F4=List key again to bring up the same menu as shown above.
      6. Move the cursor to highlight physical partitions and press Enter=Do twice. The F6=Command shows you that the displayed output is for the command lspv -p hdisk3.
      7. Press F3=Cancel to return to the screen with the title List Contents of a Physical Volume when you've finished reading the output.
    4. For hdisk5, repeat the steps described for hdisk3.
  5. Press F10=Exit to return to the command prompt when you've finished reading the output.
  6. Save the file /smit.log since it will contain the output of these lspv commands, which will be similar to those given in the next section.
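The documentation files mentioned in the next section can also be produced directly from the command line, without smit. A sketch that creates the two files referenced below (/home/vginfo is simply the directory used in that example):

```shell
# Collect a per-disk summary and a full partition map of perfvg.
mkdir -p /home/vginfo
{
    lspv -l hdisk3
    lspv -p hdisk3
    lspv -l hdisk5
    lspv -p hdisk5
} > /home/vginfo/vg.summary
lsvg -M perfvg > /home/vginfo/vg.detail
```
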
An Example Description of a Volume Group Design

The lspv command output in the /smit.log or /home/vginfo/vg.summary files for the physical volumes in perfvg should look like:


# lspv -l hdisk3
hdisk3:
LV NAME LPs PPs DISTRIBUTION MOUNT POINT
perfpg 5 5 00..00..05..00..00 N/A
perflv3 6 6 00..00..06..00..00 N/A
perflv2 10 10 00..00..00..10..00 N/A
perflv4 5 5 00..00..00..00..05 N/A
# lspv -p hdisk3
hdisk3:
PP RANGE STATE REGION LV ID TYPE MOUNT POINT
1-15 free outer edge
16-30 free outer middle
31-34 free center
35-39 used center perfpg paging N/A
40-45 used center perflv3 jfs N/A
46-50 free inner middle
51-60 stale inner middle perflv2 jfs N/A
61-65 free inner edge
66-70 used inner edge perflv4 jfs N/A
71-75 free inner edge
#
# lspv -l hdisk5
hdisk5:
LV NAME LPs PPs DISTRIBUTION MOUNT POINT
perflv1 10 20 00..20..00..00..00 N/A
perflv3 6 6 00..05..01..00..00 N/A
perflog 1 1 00..00..01..00..00 N/A
perfpg 5 5 00..00..05..00..00 N/A
perflv2 10 10 00..00..00..10..00 N/A
perflv4 5 5 00..00..00..00..05 N/A
# lspv -p hdisk5
hdisk5:
PP RANGE STATE REGION LV ID TYPE MOUNT POINT
1-58 free outer edge
59-90 free outer middle
91-100 stale outer middle perflv1 jfs N/A
101-110 used outer middle perflv1 jfs N/A
111-115 used outer middle perflv3 jfs N/A
116-116 used center perflv3 jfs N/A
117-117 used center perflog jfslog N/A
118-139 free center
140-144 used center perfpg paging N/A
145-172 free center
173-200 free inner middle
201-210 used inner middle perflv2 jfs N/A
211-229 free inner middle
230-232 free inner edge
233-237 used inner edge perflv4 jfs N/A
238-287 free inner edge
#

The lsvg -M perfvg command output in the /home/vginfo/vg.detail file should look like:


perfvg
hdisk3:1-34
hdisk3:35 perfpg:2:1
hdisk3:36 perfpg:4:1
hdisk3:37 perfpg:1:2
hdisk3:38 perfpg:3:2
hdisk3:39 perfpg:5:2
hdisk3:40 perflv3:11
hdisk3:41 perflv3:12
hdisk3:42 perflv3:7
hdisk3:43 perflv3:8
hdisk3:44 perflv3:3
hdisk3:45 perflv3:4
hdisk3:46-50
hdisk3:51 perflv2:1:2
hdisk3:52 perflv2:2:2
hdisk3:53 perflv2:3:2
hdisk3:54 perflv2:4:2
hdisk3:55 perflv2:5:2
hdisk3:56 perflv2:6:2
hdisk3:57 perflv2:7:2
hdisk3:58 perflv2:8:2
hdisk3:59 perflv2:9:2
hdisk3:60 perflv2:10:2
hdisk3:61-65
hdisk3:66 perflv4:6
hdisk3:67 perflv4:7
hdisk3:68 perflv4:8
hdisk3:69 perflv4:9
hdisk3:70 perflv4:10
hdisk3:71-75
hdisk5:1-90
hdisk5:91 perflv1:1:2
hdisk5:92 perflv1:2:2
hdisk5:93 perflv1:3:2
hdisk5:94 perflv1:4:2
hdisk5:95 perflv1:5:2
hdisk5:96 perflv1:6:2
hdisk5:97 perflv1:7:2
hdisk5:98 perflv1:8:2
hdisk5:99 perflv1:9:2
hdisk5:100 perflv1:10:2

The long output continues like this:


hdisk5:101 perflv1:1:1
hdisk5:102 perflv1:2:1
hdisk5:103 perflv1:3:1
hdisk5:104 perflv1:4:1
hdisk5:105 perflv1:5:1
hdisk5:106 perflv1:6:1
hdisk5:107 perflv1:7:1
hdisk5:108 perflv1:8:1
hdisk5:109 perflv1:9:1
hdisk5:110 perflv1:10:1
hdisk5:111 perflv3:1
hdisk5:112 perflv3:2
hdisk5:113 perflv3:5
hdisk5:114 perflv3:6
hdisk5:115 perflv3:9
hdisk5:116 perflv3:10
hdisk5:117 perflog:1
hdisk5:118-139
hdisk5:140 perfpg:1:1
hdisk5:141 perfpg:3:1
hdisk5:142 perfpg:5:1
hdisk5:143 perfpg:2:2
hdisk5:144 perfpg:4:2
hdisk5:145-200
hdisk5:201 perflv2:1:1
hdisk5:202 perflv2:2:1
hdisk5:203 perflv2:3:1
hdisk5:204 perflv2:4:1
hdisk5:205 perflv2:5:1
hdisk5:206 perflv2:6:1
hdisk5:207 perflv2:7:1
hdisk5:208 perflv2:8:1
hdisk5:209 perflv2:9:1
hdisk5:210 perflv2:10:1
hdisk5:211-232
hdisk5:233 perflv4:1
hdisk5:234 perflv4:2
hdisk5:235 perflv4:3
hdisk5:236 perflv4:4
hdisk5:237 perflv4:5
hdisk5:238-287

How to Test the Performance of the Design

This section gives an example of how you can obtain an indication of what effect the different attributes can have when you create a logical volume.


**** Warning - Your results will be different ****

Of course, every site may have unique features, such as different hardware, different I/O requests, and different system loads, which may produce different results for you if you try the following commands.


You can do a simple test by just copying a very large file from the same fixed location on another volume group to each of:

To do this, we first need to create a journaled file system on each of these logical volumes:

  1. To create /perfjfs1 on perflv1:
    1. Execute smitty jfs to get the Journaled File Systems menu.
    2. Select Add a Journaled File System on a Previously Defined Logical Volume.
    3. For the field LOGICAL VOLUME name, use the F4=List key to select perflv1.
    4. Type /perfjfs1 in the field MOUNT POINT.
    5. Change the field Mount AUTOMATICALLY at system restart? to yes by using the Tab key.
    6. Leave the other fields with their default values so that the screen looks like:
             Add a Journaled File System on a Previously Defined Logical Volume


      Type or select values in entry fields.
      Press Enter AFTER making all desired changes.

      [Entry Fields]
      * LOGICAL VOLUME name perflv1 +

      * MOUNT POINT [/perfjfs1]
      Mount AUTOMATICALLY at system restart? yes +
      PERMISSIONS read/write +
      Mount OPTIONS [] +
      Start Disk Accounting? no +
      Fragment Size (bytes) 4096 +
      Number of bytes per inode 4096 +
      Compression algorithm no +






      F1=Help F2=Refresh F3=Cancel F4=List
      F5=Reset F6=Command F7=Edit F8=Image
      F9=Shell F10=Exit Enter=Do

    7. Press the Enter=Do key.
    8. Press the F3=Cancel key when smit returns an OK message to return to the menu with the title Add a Journaled File System on a Previously Defined Logical Volume.
  2. For perflv2, repeat the above to create the /perfjfs2 journaled file system.
  3. For perflv3, repeat the above to create the /perfjfs3 journaled file system.
  4. For perflv4, repeat the above to create the /perfjfs4 journaled file system.
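The smit dialog above runs the crfs command underneath. A sketch of an equivalent command-line sequence, assuming the default fragment size and bytes-per-inode values shown on the screen:

```shell
# Create a journaled file system on each previously defined logical volume:
#   -v jfs   file system type
#   -d       existing logical volume to hold the file system
#   -m       mount point
#   -A yes   mount automatically at system restart
crfs -v jfs -d perflv1 -m /perfjfs1 -A yes
crfs -v jfs -d perflv2 -m /perfjfs2 -A yes
crfs -v jfs -d perflv3 -m /perfjfs3 -A yes
crfs -v jfs -d perflv4 -m /perfjfs4 -A yes
```
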

Now we need to check that we have a suitable source file, and then copy it to each of the four newly created journaled file systems. Use the timex command to record the time taken by the cp command. If you have access to the AIX Version 4.1 Hypertext Information Base Library, then you may be able to do the following (note that our test used files from a copy of InfoExplorer that was loaded on a physical volume).


# cd /usr/lpp/info/lib/en_US/aix41
# ls -l cmds/cmds.rom
-rw-r--r-- 1 root system 21805056 May 12 09:33 cmds/cmds.rom
# timex cp cmds/cmds.rom manage/manage.rom /perfjfs1

real 59.53
user 0.35
sys 9.14
# timex cp cmds/cmds.rom manage/manage.rom /perfjfs2

real 34.15
user 0.40
sys 10.64
# timex cp cmds/cmds.rom manage/manage.rom /perfjfs3

real 28.62
user 0.37
sys 10.36
# timex cp cmds/cmds.rom manage/manage.rom /perfjfs4

real 28.66
user 0.42
sys 9.99

You may wish to use a command such as iostat 5 | tee iostat.perfjfsx to monitor disk activity during each of these commands, which is what we did.
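One way to combine the monitoring with the timed copy is to run iostat in the background for the duration of each test. A sketch, assuming the working directory and file names from the example above, with output files following the iostat.perfjfsx convention:

```shell
# Record disk activity every 5 seconds while the timed copy runs,
# then stop the monitor once the copy completes.
iostat 5 > iostat.perfjfs1 &
IOSTAT_PID=$!
timex cp cmds/cmds.rom manage/manage.rom /perfjfs1
kill $IOSTAT_PID
```
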

Let's now look at these results for the mirrored and non-mirrored tests:

  1. Two copy logical volume tests - perflv1 and perflv2.

    The 25 second difference clearly indicates the cost required to create a highly available logical volume. As discussed in Storage Subsystem Design, and in the AIX V3.2 Performance Monitoring and Tuning Guide, the following options will degrade performance during a write operation:

    You can see the effect of the copy location data in the following output from the iostat command:

  2. Single copy logical volume tests - perflv3 and perflv4.

    The main difference between the maps is that the well-placed non-mirrored logical volume, perflv3, uses a second disk for some of its logical partitions, arranged in a partially contiguous manner across the center and outer middle regions of the disks. This method of creating perflv3 shows the fine control available from the use of map files. For example, the file centre.map:


    # cat centre.map
    hdisk5:111-112
    hdisk3:44-45
    hdisk5:113-114
    hdisk3:42-43
    hdisk5:115-116
    hdisk3:40-41

    shows that a logical volume built with it will occupy the center region of hdisk3 and the center and outer middle regions of hdisk5. It also shows that we have decided to use two physical partitions from hdisk5, then two physical partitions from hdisk3, and so on. The allocation precision obtained from the use of a map file can be seen in the output of the following command:
    # lslv -m perflv3
    perflv3:/perfjfs3
    LP PP1 PV1 PP2 PV2 PP3 PV3
    0001 0111 hdisk5
    0002 0112 hdisk5
    0003 0044 hdisk3
    0004 0045 hdisk3
    0005 0113 hdisk5
    0006 0114 hdisk5
    0007 0042 hdisk3
    0008 0043 hdisk3
    0009 0115 hdisk5
    0010 0116 hdisk5
    0011 0040 hdisk3
    0012 0041 hdisk3

    If the same logical volume had been built on hdisk3 and hdisk5 using a maximum range for its Inter-Physical Volume Allocation Policy, then the pattern would be one physical partition on hdisk5, one on hdisk3, and so on. This would give better performance, so it's not surprising that for both single copy physical volumes, the copy time was about 28.6 seconds.

    The fact that the time is approximately the same also suggests that the center/middle disk region is not much faster than the edge region. However, this write had no competition from other disk requests, so the disk heads would have had minimal movement across the disk platter regions.

One final performance result of interest is that in our scenario, mirroring always degraded performance, even when tuned. This is not surprising, since every logical write request is translated into two physical write operations. However, you may obtain different results in another environment, particularly if your test is based on a read rather than a write operation, which we did not investigate here.
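If you do want to repeat the comparison for reads, a similar timed copy into /dev/null would exercise only the read side of each logical volume. A sketch we did not run ourselves; it assumes the cmds.rom files copied in the earlier test are still present:

```shell
# Read each test file back; copying to /dev/null removes the write
# side of the operation from the measurement.
timex cp /perfjfs1/cmds.rom /dev/null
timex cp /perfjfs2/cmds.rom /dev/null
timex cp /perfjfs3/cmds.rom /dev/null
timex cp /perfjfs4/cmds.rom /dev/null
```
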

Managing Backup and Restore

It is critically important that a systems administrator both implements and understands a reliable backup and recovery policy. This section shows you an example of how to use the volume group backup utilities to save and recover your system, if your data or configuration information is damaged beyond a practical repair timeframe.

In particular, the examples in this section will describe:


**** Suggestion - One image ****

You should usually place one volume group image on one tape when you use the smit defaults; this is a simple backup rule you may want to adopt. Volume groups that contain two to three physical volumes will usually be several gigabytes in size, so each volume group will thus usually require at least one tape cartridge. Also note the smit fields that you see when you execute smitty savevg:

However, you may be able to get around this by using the tctl command and the no-rewind tape device name as in the following sequence from the command line:


# savevg -i -f'/dev/rmt0.1' 'availvg'
# savevg -i -f'/dev/rmt0.1' 'perfvg'
# tctl -f/dev/rmt0 rewind
# restore -Tvf/dev/rmt0.1
# restore -Tvf/dev/rmt0.1

This seemed to work fine, but was not fully investigated in this example.
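If you later want to work with only the second image (perfvg) on such a tape, the no-rewind device and the tctl fsf subcommand can be combined to skip past the first image. A sketch, not verified in this example, which assumes each savevg image occupies one tape file:

```shell
# Rewind, forward-space past the first (availvg) image, then list
# the contents of the second (perfvg) image.
tctl -f /dev/rmt0 rewind
tctl -f /dev/rmt0.1 fsf 1
restore -Tvf /dev/rmt0.1
```
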

If you have a number of small machines attached to a server that has a large capacity tape drive, then you may decide to use disk space on the server as a temporary storage area for all your volume group images from the smaller machines until you back them up.


You must become familiar with the backup and recovery issues and procedures discussed in:

How to Use the savevg and restvg Commands

This example follows the steps in Backing Up Your System and Restoring a User Volume Group, in AIX Version 4.1 System Management Guide: Operating System and Devices, to create and then restore a backup tape image of perfvg. Since we can use the SHRINK the filesystems? option when we restore the volume group with restvg, we'll create one backup with map files to try to preserve our design efforts in A Design Example for Improved Performance.

We can then use the same backup to rebuild perfvg a second time, but this time we'll try to shrink the journaled file systems. Note that if we shrink the journaled file systems, the resulting extra free physical partitions in the volume group mean that we cannot maintain the physical partition map.

Command Line Summary

There is a simple sequence of commands that an experienced systems administrator can use to manage the backup and recovery of user volume groups. These commands come from a few smit menus, which are described in the next section.

If you want to discover what smit is doing under the covers, press the F6=Command command key to see that:

You should note that:

Detailed Guidance

To create the backup image of the perfvg volume group:

  1. Execute smitty vg to get to the menu with the title Volume Groups.
  2. Select Back Up a Volume Group, or, from the smit menu:
    1. Execute smitty.
    2. Select System Storage Management (Physical & Logical Storage).
    3. Select Logical Volume Manager.
    4. Select Volume Groups.
    5. Select Back Up a Volume Group.
  3. If your tape device is different from the default /dev/rmt0, then type the correct target tape device (or file name) in the Backup DEVICE or FILE field, or use the F4=List key to select it.
  4. Type perfvg in the VOLUME GROUP to back up field, or use the F4=List key to select it.
  5. Use the Tab key to toggle the Create MAP files? field value from no to yes.

    This should ensure that we maintain the precise physical partition allocation, documented in Map Files Usage and Contents, that we used to create perfvg. Your screen should look like:


                                 Back Up a Volume Group

    Type or select values in entry fields.
    Press Enter AFTER making all desired changes.

    [Entry Fields]
    WARNING: Execution of the savevg command will
    result in the loss of all material
    previously stored on the selected
    output medium.
    * Backup DEVICE or FILE [/dev/rmt0] +/
    * VOLUME GROUP to back up [perfvg] +
    Create MAP files? yes +
    EXCLUDE files? no +
    Number of BLOCKS to write in a single output [] #
    (Leave blank to use a system default)





    F1=Help F2=Refresh F3=Cancel F4=List
    F5=Reset F6=Command F7=Edit F8=Image
    F9=Shell F10=Exit Enter=Do

  6. Press the Enter=Do key to backup perfvg.
  7. Although your backup may require multiple tape volumes, our example fits on one tape and the output screen should resemble:
                                     COMMAND STATUS

    Command: OK stdout: yes stderr: no

    Before command completion, additional instructions may appear below.

    a ./tmp/vgdata/vgdata.files
    a ./tmp/vgdata/perfvg/filesystems
    a ./tmp/vgdata/perfvg/perfvg.data
    a ./tmp/vgdata/perfvg/perflog.map
    a ./tmp/vgdata/perfvg/perflv1.map
    a ./tmp/vgdata/perfvg/perflv2.map
    a ./tmp/vgdata/perfvg/perflv3.map
    a ./tmp/vgdata/perfvg/perflv4.map
    a ./tmp/vgdata/perfvg/perfpg.map
    a ./perfjfs4
    a ./perfjfs3
    a ./perfjfs2
    a ./perfjfs1
    0512-038 savevg: Backup Completed Successfully.


    F1=Help F2=Refresh F3=Cancel F6=Command
    F8=Image F9=Shell F10=Exit /=Find
    n=Find Next

    You must check the end of the output by using any appropriate key, such as End, Page Down, or Ctrl-V, to look for the string 0512-038 savevg: Backup Completed Successfully. This will ensure that there is no hidden error message, although such a message may be only a warning. In this case, our backup appears to be fine.

  8. Press the F3=Cancel key to return to the Volume Groups menu.
  9. You should check your backup by:
    1. Selecting List Files in a Volume Group Backup from the Volume Groups menu.
    2. If your tape device is different from the default /dev/rmt0, then type the correct target tape device (or file name) in the Backup DEVICE or FILE field, or use the F4=List key to select it.
    3. Press the Enter=Do key to check the backup of your volume group. If your backup is on multiple tape volumes, insert them as required.
  10. Press the F10=Exit key to return to the command prompt when your backup (and backup check) is complete.

Now that our backup is complete, we can try to rebuild the perfvg volume group with the same physical partition layout as at the time of the backup.

Before we restored the backup, we ran the following commands:


# lspv -M hdisk3 > /tmp/perfvg/hdisk3.map.before
# lspv -M hdisk5 > /tmp/perfvg/hdisk5.map.before

We can easily repeat similar commands after we've recreated perfvg so that we can check if there are any changes to the physical partition layout by using the diff command on the output files.

To recreate perfvg:

  1. You must ensure that all target disks are considered free for allocation by the operating system. In other words, you must see the word None next to the disk name when you execute the lspv command. Assume that you are recreating a volume group on the same disks that were used by the volume group when savevg was executed. If these disks are still being used by the volume group when you want to rebuild it, then you can:
    1. Unmount all journaled file systems. (You will have to change paging devices and reboot before this step if necessary.)
    2. Use the varyoffvg command on the volume group.
    3. Use the exportvg command for the volume group.
    4. Check via lspv.

      For this example, the procedure is:

      1. Execute chps -a'n' 'perfpg' and reboot.

        A smit menu for this deactivation of a paging logical volume, from the command smitty chps, is discussed in Manipulating Page Space.

      2. Execute umount /perfjfs1.
      3. Execute umount /perfjfs2.
      4. Execute umount /perfjfs3.
      5. Execute umount /perfjfs4.
      6. Vary off the perfvg volume group using varyoffvg perfvg.
      7. Export the perfvg volume group using exportvg perfvg.
      8. Check that the lspv output looks like:
        # lspv
        hdisk0 00014732b1bd7f57 rootvg
        hdisk1 0001221800072440 stripevg
        hdisk2 00012218da42ba76 rootvg
        hdisk4 0000020158496d72 availvg
        hdisk5 00000201dc8b0b32 None
        hdisk6 000002007bb618f5 availvg
        hdisk3 0002479088f5f347 None
        hdisk8 000137231982c0f2 stripevg
        hdisk7 none None

        hdisk5 and hdisk3 are not associated with any volume group by the operating system, and hence they can be used as the target physical volumes for the recreation of perfvg.

  2. Execute smitty restvg.

    Or, if you have come down from the main smit menu to the Volume Groups menu, then select Remake a Volume Group. Don't select Restore Files in a Volume Group Backup.

  3. If your tape device is different from the default /dev/rmt0, then type the correct source tape device (or file name) in the Restore DEVICE or FILE field, or use the F4=List key to select it so that your screen looks like:
                                 Remake a Volume Group

    Type or select values in entry fields.
    Press Enter AFTER making all desired changes.

    [Entry Fields]
    * Restore DEVICE or FILE [/dev/rmt0] +/
    SHRINK the filesystems? no +
    PHYSICAL VOLUME names [] +
    (Leave blank to use the PHYSICAL VOLUMES listed
    in the vgname.data file in the backup image)
    Number of BLOCKS to read in a single input [] #
    (Leave blank to use a system default)









    F1=Help F2=Refresh F3=Cancel F4=List
    F5=Reset F6=Command F7=Edit F8=Image
    F9=Shell F10=Exit Enter=Do

  4. Press the Enter=Do key to get to the following menu prompt:
                                     COMMAND STATUS

    Command: running stdout: yes stderr: yes

    Before command completion, additional instructions may appear below.


    Will create the Volume Group: perfvg
    Target Disks: hdisk3
    hdisk5
    Allocation Policy:
    Shrink Filesystems: no
    Preserve Physical Partitions for each Logical Volume: yes


    Enter "y" to continue:












  5. Type the character y and press Enter=Do.

    Note that you are asked to confirm the target disks since they may contain data that you want to keep. Remember that we only exported perfvg to free up these disks, so hdisk3 and hdisk5 still contain valid data because exportvg does not write to the physical volumes that are in the exported volume group.

    If you did not check that your target disks are free, you may see an error such as:


                                     COMMAND STATUS

    Command: failed stdout: yes stderr: yes

    Before command completion, additional instructions may appear below.


    Will create the Volume Group: perfvg
    Target Disks: hdisk3
    hdisk5
    Allocation Policy:
    Shrink Filesystems: no
    Preserve Physical Partitions for each Logical Volume: yes


    Enter "y" to continue:
    0512-037 restvg: Target Disk hdisk3 Already belongs to a Volume Group. Restore
    of Volume Group canceled.




    F1=Help F2=Refresh F3=Cancel F6=Command
    F8=Image F9=Shell F10=Exit /=Find
    n=Find Next

    If you get this error even after you have exported the old volume group, you may have to first double check using the lsdev -Cc disk command that the physical volume names have not changed since you made the backup. You may also have to format the target disks.

    However, you should usually have no problem getting to a screen like:


                                     COMMAND STATUS

    Command: OK stdout: yes stderr: yes

    Before command completion, additional instructions may appear below.

    [TOP]

    Will create the Volume Group: perfvg
    Target Disks: hdisk3
    hdisk5
    Allocation Policy:
    Shrink Filesystems: no
    Preserve Physical Partitions for each Logical Volume: yes


    Enter "y" to continue: New volume on /dev/rmt0:
    Cluster 51200 bytes (100 blocks).
    Volume number 1
    Date of backup: Tue Jul 12 19:14:13 1994
    [MORE...18]

    F1=Help F2=Refresh F3=Cancel F6=Command
    F8=Image F9=Shell F10=Exit /=Find
    n=Find Next

    You should go to the bottom of the smit output by using any appropriate key such as End, or Page Down, or Ctrl-V to confirm there is no hidden error message, although such a message may be only a warning.

  6. In our example, the command seems to have completed successfully so press the F10=Exit key to return to the command prompt so that we can confirm that:
Check the Restored Volume Group

The restvg command has automatically mounted our file systems and there are no data access problems. Although we expect our mirror setup to be maintained, the level of AIX Version 4 used in this example has resulted in only one physical partition being allocated to each logical partition in every logical volume in the perfvg volume group. This can be seen from the output of:


# lsvg -l perfvg
perfvg:
LV NAME TYPE LPs PPs PVs LV STATE MOUNT POINT
perflv1 jfs 10 10 1 open/syncd /perfjfs1
perflv2 jfs 10 10 1 open/syncd /perfjfs2
perflv3 jfs 12 12 2 open/syncd /perfjfs3
perflv4 jfs 10 10 2 open/syncd /perfjfs4
perflog jfslog 1 1 1 open/syncd N/A
perfpg paging 5 5 2 closed/syncd N/A

However, it is only the secondary copies that have been lost. The primary copies have been restored to an identical physical partition layout. If you look at the contents of the file badmir.map in Map Files Usage and Contents, you can see that the layout specified by this map file is consistent with the output of the following command:


# lspv -M hdisk5 |grep perflv1
hdisk5:101 perflv1:1
hdisk5:102 perflv1:2
hdisk5:103 perflv1:3
hdisk5:104 perflv1:4
hdisk5:105 perflv1:5
hdisk5:106 perflv1:6
hdisk5:107 perflv1:7
hdisk5:108 perflv1:8
hdisk5:109 perflv1:9
hdisk5:110 perflv1:10
#

If you have many large logical volumes in the volume group that you've recreated, then it may not be easy to visually compare them. As an alternative, you can execute the following commands:


# lspv -M hdisk3 > hdisk3.map.after
# lspv -M hdisk5 > hdisk5.map.after
# diff hdisk5.map.after hdisk5.map.before|grep perflv3
# diff hdisk3.map.after hdisk3.map.before|grep perflv3
#

The diff command compares the ASCII text files that contain the physical volume physical partition allocation map both before and after perfvg was rebuilt. The grep command confirms that diff has found no difference for the perflv3 logical volume, so we know that its layout has been maintained.
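If you prefer to check every logical volume rather than just perflv3, the same diff and grep combination can be run in a loop. A sketch, assuming the .map.after files created above and the .map.before files saved earlier in /tmp/perfvg:

```shell
# Report any physical partition that moved, per logical volume.
# Empty output for a logical volume means its layout is unchanged.
for lv in perflv1 perflv2 perflv3 perflv4 perflog perfpg
do
    echo "=== $lv ==="
    diff hdisk3.map.after /tmp/perfvg/hdisk3.map.before | grep $lv
    diff hdisk5.map.after /tmp/perfvg/hdisk5.map.before | grep $lv
done
```
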

How to Recover Space in a User Volume Group

Unlike the rootvg volume group journaled file systems, you may be able to recover space in a logical volume in another volume group without affecting other users. For example, you may be able to:

  1. Make a current backup of logical volume data.
  2. Close the logical volume (for example, by unmounting the associated journaled file system).
  3. Remove the logical volume.
  4. Recreate the logical volume (and its associated journaled file system) with a smaller size.
  5. Restore the data.
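As a sketch of how these five steps might look for /perfjfs1 alone: the tape device, the backup-by-name method, and the size value of 8192 512-byte blocks are illustrative assumptions only, and note that rmfs removes both the file system and its underlying logical volume.

```shell
# 1. Back up the data by name
cd /perfjfs1 && find . -print | backup -i -f /dev/rmt0
# 2. Close the logical volume
cd / && umount /perfjfs1
# 3. Remove the file system and its logical volume
rmfs /perfjfs1
# 4. Recreate it with a smaller size (size is in 512-byte blocks)
crfs -v jfs -g perfvg -a size=8192 -m /perfjfs1 -A yes
# 5. Mount the new file system and restore the data into it
mount /perfjfs1
cd /perfjfs1 && restore -xqf /dev/rmt0
```
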

However, if you have multiple logical volumes in one volume group that you wish to recover space in, then the above process may be lengthy. It's probably easier to back up the volume group using savevg, export it to deallocate its physical volumes, and then recreate it with restvg, as in the following procedure, which is almost identical to the previous example.

  1. Execute smitty restvg to get to the menu with the title Remake a Volume Group.
  2. If your tape device is different from the default /dev/rmt0, then type the correct source tape device (or file name) in the Restore DEVICE or FILE field, or use the F4=List key to select it.
  3. Press the Tab key to toggle the field SHRINK the filesystems? from no to yes so that your screen should look like:
                                 Remake a Volume Group

    Type or select values in entry fields.
    Press Enter AFTER making all desired changes.

    [Entry Fields]
    * Restore DEVICE or FILE [/dev/rmt0] +/
    SHRINK the filesystems? yes +
    PHYSICAL VOLUME names [] +
    (Leave blank to use the PHYSICAL VOLUMES listed
    in the vgname.data file in the backup image)
    Number of BLOCKS to read in a single input [] #
    (Leave blank to use a system default)









    F1=Help F2=Refresh F3=Cancel F4=List
    F5=Reset F6=Command F7=Edit F8=Image
    F9=Shell F10=Exit Enter=Do

  4. Press the Enter=Do key to get to the following menu prompt:
                                     COMMAND STATUS

    Command: running stdout: yes stderr: yes

    Before command completion, additional instructions may appear below.


    Will create the Volume Group: perfvg
    Target Disks: hdisk3
    hdisk5
    Allocation Policy:
    Shrink Filesystems: yes
    Preserve Physical Partitions for each Logical Volume: no


    Enter "y" to continue:












  5. Type the character y and press Enter=Do.

    Note that you are asked to confirm the target disks since they may contain data that you want to keep. Remember that we only exported perfvg to free up these disks, so hdisk3 and hdisk5 still contain valid data because exportvg does not write to the physical volumes that are in the exported volume group.

    You should get to a screen like:


                                     COMMAND STATUS

    Command: OK stdout: yes stderr: yes

    Before command completion, additional instructions may appear below.

    [TOP]


    Will create the Volume Group: perfvg
    Target Disks: hdisk3
    hdisk5
    Allocation Policy:
    Shrink Filesystems: yes
    Preserve Physical Partitions for each Logical Volume: no


    Enter "y" to continue: New volume on /dev/rmt0:
    Cluster 51200 bytes (100 blocks).
    Volume number 1
    [MORE...18]

    F1=Help F2=Refresh F3=Cancel F6=Command
    F8=Image F9=Shell F10=Exit /=Find
    n=Find Next

    You should go to the bottom of the smit output by using any appropriate key such as End, or Page Down, or Ctrl-V to confirm there is no hidden error message, although such a message may be only a warning.

  6. In our example, the command seems to have completed successfully, so press the F10=Exit key to return to the command prompt.

The output of the following commands shows us that the space saving operation has worked. The mirrored logical volumes have been correctly created, and the journaled file systems have been mounted so that we can confirm that our data files have been restored:


# lsvg -l perfvg
perfvg:
LV NAME TYPE LPs PPs PVs LV STATE MOUNT POINT
perflv1 jfs 1 2 2 open/syncd /perfjfs1
perflv2 jfs 1 2 2 open/syncd /perfjfs2
perflv3 jfs 1 1 1 open/syncd /perfjfs3
perflv4 jfs 1 1 1 open/syncd /perfjfs4
perflog jfslog 1 1 1 open/syncd N/A
perfpg paging 5 10 2 closed/syncd N/A
#
# df -I /perf*
Filesystem 512-blocks Used Free %Used Mounted on
/dev/perflv1 2640 208 2432 7% /perfjfs1
/dev/perflv2 2640 208 2432 7% /perfjfs2
/dev/perflv3 3152 208 2944 6% /perfjfs3
/dev/perflv4 2640 208 2432 7% /perfjfs4
#

Note that although each journaled file system requires one 4MB logical partition, the journaled file system itself is actually much smaller than this, and can be expanded to over 8000 512-byte blocks before a second logical partition will be allocated to it.
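The 8000-block figure follows from the partition size arithmetic, as this small sketch shows:

```shell
# A 4MB logical partition expressed in 512-byte blocks:
pp_size_mb=4
blocks=$(( pp_size_mb * 1024 * 1024 / 512 ))
echo "$blocks 512-byte blocks per 4MB logical partition"   # 8192
```

So a journaled file system can grow to 8192 blocks (a little over 8000) before a second logical partition must be allocated.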

How to Use the mksysb Command

This example follows the steps in "Backing Up Your System", in the AIX Version 4.1 Installation Guide, to create a backup bootable tape image of rootvg. We can do this twice:

Since we are concerned about the location of our logical volume copies in our mirrored rootvg example, we only need to do a backup that will create map files. In fact, creating map files provides more installation choices; we can change the Use Maps field in the installation menu from its default of yes so that we do not have to use the map files on the tape to rebuild the rootvg.

Once the backup is complete, we can try to reinstall AIX Version 4 from our backup tape and confirm that the mirror copies are on separate physical volumes:

Note that the entry for the image.data file in the AIX Version 3.2 Files Reference reminds us that this file in the / directory should not be modified.

Command Line Summary

Unlike AIX Version 3, the command called by smit is actually a script that, for a bootable tape, effectively runs the following command:


# /usr/bin/mksysb -i $BFLAG $EFLAG $MFLAG $DEVICE

The script that is actually run is a great deal more complicated, and can be viewed by using the F6=Command key from the smit Back Up the System menu.

Detailed Guidance

Our first example shows how to create a backup that includes map files for the rootvg. This should enable us to use this backup tape to rebuild a system that has the same disk configuration, with the physical partitions of each logical volume located in exactly the same place. This will ensure that your system performance does not suffer because, for example, a paging logical volume could otherwise be rebuilt on a slow physical volume instead of the fast physical volume that it was on when the AIX Version 4 rootvg image was built.

Always Document the Current System

Before we start any backup, it is wise to collect the output of a few commands to document the current system configuration. This information may be very valuable if you encounter a problem when you use this backup image to install AIX Version 4.

You can record information such as:


# lspv
hdisk0 00014732b1bd7f57 rootvg
hdisk1 0001221800072440 stripevg
hdisk2 00012218da42ba76 rootvg
hdisk4 0000020158496d72 availvg
hdisk5 00000201dc8b0b32 perfvg
hdisk6 000002007bb618f5 availvg
hdisk3 0002479088f5f347 perfvg
hdisk8 000137231982c0f2 stripevg
hdisk7 none None
# lsdev -Cc disk
hdisk0 Available 00-08-00-0,0 670 MB SCSI Disk Drive
hdisk1 Available 00-08-00-1,0 670 MB SCSI Disk Drive
hdisk2 Available 00-08-00-2,0 355 MB SCSI Disk Drive
hdisk4 Available 00-07-00-0,0 1.2 GB SCSI Disk Drive (in 2.4 GB Disk Unit)
hdisk5 Available 00-07-00-1,0 1.2 GB SCSI Disk Drive (in 2.4 GB Disk Unit)
hdisk6 Available 00-07-00-2,0 1.2 GB SCSI Disk Drive (in 2.4 GB Disk Unit)
hdisk3 Available 00-08-00-3,0 320 MB SCSI Disk Drive
hdisk8 Available 00-07-00-4,0 857 MB SCSI Disk Drive
hdisk7 Available 00-07-00-3,0 1.2 GB SCSI Disk Drive (in 2.4 GB Disk Unit)
# df
Filesystem 512-blocks Free %Used Iused %Iused Mounted on
/dev/hd4 8192 4480 45% 714 34% /
/dev/hd2 409592 32688 92% 5044 9% /usr
/dev/hd9var 24576 3952 83% 95 3% /var
/dev/hd3 24576 23008 6% 70 1% /tmp
/dev/hd1 8192 7680 6% 70 6% /home
/dev/availlv 49152 4888 90% 17 0% /availjfs
/dev/strlv16k 98304 9856 89% 18 0% /strjfs16k
/dev/strlv32k 65536 2640 95% 42 0% /strjfs32k
/dev/lv01 57344 616 98% 5726 8% /frag512
/dev/lv00 57344 0 100% 5100 7% /frag4096
/dev/lv02 16384 288 98% 1748 85% /frag512-1
/dev/perflv1 81920 79280 3% 16 0% /perfjfs1
/dev/perflv2 81920 79280 3% 16 0% /perfjfs2
/dev/perflv3 98304 95152 3% 16 0% /perfjfs3
/dev/perflv4 81920 79280 3% 16 0% /perfjfs4
#
# lsps -a
Page Space Physical Volume Volume Group Size %Used Active Auto Type
perfpg hdisk5 perfvg 20MB 0 no no lv
perfpg hdisk3 perfvg 20MB 0 no no lv
hd6 hdisk0 rootvg 32MB 24 yes yes lv
hd6 hdisk2 rootvg 32MB 24 yes yes lv
#

Note that since the creation of the mirror copies of the rootvg logical volumes, we deleted the paging00 device and increased the /tmp journaled file system by 4MB. An example of the output of lsvg -M rootvg before this change is shown in How to Check the Implementation of a Mirrored rootvg. This is very useful command output to keep when you want to create a rootvg image that includes map files.
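The documentation commands shown above can be gathered by a small helper so that a single snapshot file is kept with each backup tape. This is a portable sketch; the snapshot file name is illustrative, and the command list is the one used in this example:

```shell
# Append the output of each listed command to a snapshot file,
# with a header line identifying the command that produced it.
snapshot() {
    outfile=$1; shift
    for cmd in "$@"; do
        printf '=== %s ===\n' "$cmd" >> "$outfile"
        $cmd >> "$outfile" 2>&1
    done
}

# On AIX one would call, for example (file name is illustrative):
# snapshot "rootvg.doc.$(date +%Y%m%d)" lspv "lsdev -Cc disk" df "lsps -a" "lsvg -M rootvg"
```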

Create the rootvg Image on a Bootable Tape

As well as the prerequisites listed in Backing Up Your System, in AIX Version 4.1 Installation Guide, you need to ensure that you have:

We can see from the above output of the df command that we have 23008 512-byte blocks free in the /tmp journaled file system. This is more than 8.2MB, so we should have enough working space for the backup process. Even if df showed that /tmp was almost full, or on the borderline of running out of space, we know that rootvg has some free physical partitions, so we could simply set the smit field EXPAND /tmp if needed? as described below. We also know that all the rootvg journaled file systems that we want to back up are currently mounted. To create the rootvg image:

  1. Execute the command smitty mksysb.

    If you want to find this in the AIX Version 4 smit menu hierarchy:

    1. Execute the command smitty.
    2. Select System Storage Management (Physical & Logical Storage).
    3. Select System Backup Manager.
    4. Select Back Up the System.

    Your screen should now have a menu with the title Back Up the System.

  2. Type the name of your backup device, such as /dev/rmt0, in the Backup DEVICE or FILE field if it is different from the default value that appears in the field.
  3. Ensure that we'll have enough working space by using the Tab key to set EXPAND /tmp if needed? to yes.
  4. Use the Tab key to toggle Create MAP files? from no to yes so that your screen looks like:
                                   Back Up the System

    Type or select values in entry fields.
    Press Enter AFTER making all desired changes.

    [Entry Fields]
    WARNING: Execution of the mksysb command will
    result in the loss of all material
    previously stored on the selected
    output medium. This command backs
    up only rootvg volume group.

    * Backup DEVICE or FILE [/dev/rmt0] +/
    Make BOOTABLE backup? yes +
    (Applies only to tape media)
    EXPAND /tmp if needed? (Applies only to bootable yes +
    media)
    Create MAP files? yes +
    EXCLUDE files? no +
    Number of BLOCKS to write in a single output [] #
    #
    (Leave blank to use a system default)

    F1=Help F2=Refresh F3=Cancel F4=List
    F5=Reset F6=Command F7=Edit F8=Image
    F9=Shell F10=Exit Enter=Do

  5. Press the Enter=Do key to create the backup using the write-enabled tape that we placed in the tape drive (we want to back up all files, and we'll let the operating system determine the appropriate number of blocks to use for the /dev/rmt0 tape device).
  6. Insert a second tape if required.

    Our example backup image fits on one 2.3GB 8mm tape cartridge. If your rootvg image requires multiple large capacity tapes, then you may need to reconsider your volume group design.

  7. Wait for the backup to complete, which will be indicated by a screen that looks like:
                                     COMMAND STATUS

    Command: OK stdout: yes stderr: yes

    Before command completion, additional instructions may appear below.

    [TOP]
    File System size changed to 24576

    bosboot: Boot image is 5173 512 byte blocks.
    Backing up to /dev/rmt0.1
    Cluster 51200 bytes (100 blocks).
    Volume 1 on /dev/rmt0.1
    a 10 ./tapeblksz
    a 24 ./tmp/vgdata/rootvg/hd1.map
    a 1300 ./tmp/vgdata/rootvg/hd2.map
    a 72 ./tmp/vgdata/rootvg/hd3.map
    a 24 ./tmp/vgdata/rootvg/hd4.map
    a 12 ./tmp/vgdata/rootvg/hd5.map
    a 24 ./tmp/vgdata/rootvg/hd5x.map
    [MORE...6679]

    F1=Help F2=Refresh F3=Cancel F6=Command
    F8=Image F9=Shell F10=Exit /=Find
    n=Find Next

    As can be seen from this output, the map files that contain the physical partition allocation data are the first files that are backed up.

    You should go to the bottom of the smit output by using any appropriate key such as End, or Page Down, or Ctrl-V to confirm that the end of the output looks like:


    a ./perfjfs3
    a ./perfjfs4
    a ./availjfs
    a ./strjfs16k
    a ./strjfs32k
    a ./frag4096
    a ./bosinst.data
    0512-038 mksysb: Backup Completed Successfully.

    It is important to check that there is no hidden error message, although such a message may be only a warning. In this case, our backup appears to have completed successfully.

  8. You may find that it is easier to read the first and last pages of this long output by checking your smit.log file when you exit smit by pressing the F10=Exit key.
    **** Warning - label all tapes ****

    You will eventually become very frustrated if you do not correctly label your backup tape(s) to include:

    • The date of backup.
    • The size of the rootvg image backed up (you may wish to keep a printed copy of the /image.data file so that you can estimate how much disk space is required on a target system installed with the SHRINK option set to yes in the installation menu).
    • The root password information.
    • A communications configuration summary.
    • The AIX version and release level.
    • The tape name to identify it accurately in your backup tape pool.

  9. Write protect and safely store your tape(s).
Check the rootvg Image

Briefly, it is important to note that you should check your backups regularly. A bootable tape should be checked by actually trying to boot the system in service mode using the tape. The actual files backed up should be checked by the smit option List Files in a System Image found in the System Backup Manager smit menu.

How to Do a Thorough Backup Tape Test

Use the tape to reinstall the source system!

To do this, you should refer to Installing BOS from a System Backup in the AIX Version 4.1 Installation Guide. This article describes in detail the steps required to recover your mksysb image, which is what we did for this example. You will improve your understanding of the installation process if you continue with the prompted installation.

In this example, you will encounter a problem if you try to use map files. As we discussed at the start of this chapter, the names of the physical volumes may change if some of the disks were added to the system at different times. In our case, the installation process identified hdisk5 and hdisk7 as the target disks for the rootvg. You can check the actual SCSI addresses of these disks to confirm that this is correct. However, because the map files only contain the names of the disks, the installation process only recognized hdisk0 and hdisk2 (the original names of the rootvg disks) as having map files.

Hence, we are forced to install AIX Version 4 with the defaults of:

This problem with the physical volume names shows us something useful: a backup volume group image created with map files does not have to be installed with the Use Maps field set to yes.

When the installation is complete, we can see the following disk configuration:


# lspv
hdisk0 0000020158496d72 None
hdisk1 00000201dc8b0b32 None
hdisk2 000002007bb618f5 None
hdisk3 none None
hdisk4 000137231982c0f2 None
hdisk5 00014732b1bd7f57 rootvg
hdisk6 0001221800072440 None
hdisk7 00012218da42ba76 rootvg
hdisk8 0002479088f5f347 None
# lsdev -Cc disk
hdisk0 Available 00-07-00-0,0 1.2 GB SCSI Disk Drive (in 2.4 GB Disk Unit)
hdisk1 Available 00-07-00-1,0 1.2 GB SCSI Disk Drive (in 2.4 GB Disk Unit)
hdisk2 Available 00-07-00-2,0 1.2 GB SCSI Disk Drive (in 2.4 GB Disk Unit)
hdisk3 Available 00-07-00-3,0 1.2 GB SCSI Disk Drive (in 2.4 GB Disk Unit)
hdisk4 Available 00-07-00-4,0 857 MB SCSI Disk Drive
hdisk5 Available 00-08-00-0,0 670 MB SCSI Disk Drive
hdisk6 Available 00-08-00-1,0 670 MB SCSI Disk Drive
hdisk7 Available 00-08-00-2,0 355 MB SCSI Disk Drive
hdisk8 Available 00-08-00-3,0 320 MB SCSI Disk Drive
#

The new assignment of disk names reflects the following two rules:

  1. SCSI physical volumes connected to adapters that are in lower slot numbers are configured first.
  2. SCSI physical volumes with smaller addresses are configured first.
This means that the disk at address 00-08-00-0,0, which was hdisk0, is now called hdisk5. Likewise, hdisk2 became hdisk7. However, note that the disk physical volume identifiers have not changed. In other words, the number 00014732b1bd7f57 that was associated with the rootvg disk hdisk0 is still associated with the same physical volume, which is now called hdisk5. This is expected because when an identifier is given to a disk, it is actually recorded on the disk itself.
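Because the physical volume identifier survives reinstallation, the old-to-new renaming can be recovered mechanically by joining the two lspv listings on their second field. The sample data below is taken from the rootvg lines of the listings in this example:

```shell
# lspv output before reinstallation (rootvg disks only)
cat > old.lspv <<'EOF'
hdisk0 00014732b1bd7f57 rootvg
hdisk2 00012218da42ba76 rootvg
EOF
# lspv output after reinstallation
cat > new.lspv <<'EOF'
hdisk5 00014732b1bd7f57 rootvg
hdisk7 00012218da42ba76 rootvg
EOF
# Join on the physical volume identifier (field 2) to map old names to new
awk 'NR==FNR { old[$2] = $1; next }
     $2 in old { print old[$2], "->", $1 }' old.lspv new.lspv
```

This prints `hdisk0 -> hdisk5` and `hdisk2 -> hdisk7`, matching the renaming described above.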

Some of a physical volume's Volume Group Descriptor Area (VGDA) can be seen with the following low-level command:


# lqueryvg -p hdisk0 -At
Max LVs: 256
PP Size: 22
Free PPs: 560
LV count: 2
PV count: 2
Total VGDAs: 3
Logical: 000004461ed9e52e.1 availlv 1
000004461ed9e52e.2 loglv00 1
Physical: 0000020158496d72 2 0
000002007bb618f5 1 0
#

This tells us that the disk that is currently called hdisk0 belongs to a volume group that contains one other physical volume. Notice that the volume group physical volumes are identified by their unique hexadecimal number rather than by a name such as hdisk0.

The above command will also help us fix another problem that exists since our installation of the backup tape. We can see from the earlier output of the lspv command that the other non-rootvg physical volumes have the word None next to them which reflects the fact that the current operating system does not know about our user volume groups. This means that we will have to reimport these volume groups. This is not difficult since we can use the output of the lspv command and the lqueryvg command for some of the disks to determine how many user volume groups we have.

Alternatively, we could use the following command for every disk to determine the volume group configuration. When you can see the logical volume names on your screen, you can press the Ctrl and C keys simultaneously to exit this command.


# /usr/bin/strings /dev/rhdisk0| more
XImr
_LVM
DEFECT
_LVM
DEFECT
XImr
availlv
loglv00

Of course, since we had saved the output of the commands:

in our original system, then we do not have to use lqueryvg or strings.
**** Warning - Name your logical volumes ****

This example clearly illustrates the value of using meaningful names for your logical volumes and volume groups. We recommend that you always use the option Add a Journaled File System on a Previously Defined Logical Volume in the menu found from smitty jfs, rather than Add a Journaled File System, for long term data files.

It is easier for us to recognize something called availlv rather than lv00.


How to Import a Volume Group

To import the user volume groups:

  1. Execute smitty importvg to get to the menu with the title Import a Volume Group, or, more generally:
    1. Execute smitty.
    2. Select System Storage Management (Physical & Logical Storage).
    3. Select Logical Volume Manager.
    4. Select Volume Groups.
    5. Select Import a Volume Group.
  2. Type the name of the volume group, such as availvg, in the VOLUME GROUP name field.
  3. Type the name of one physical volume that you know belongs to this volume group, such as hdisk0 for availvg, in the PHYSICAL VOLUME name field, or use the F4=List key to select it.

    Note that only one physical volume is required to import a volume group. As we saw from the earlier output of the lqueryvg command, each disk in a volume group knows what physical volumes belong to the volume group, and what logical volumes exist on the volume group.

  4. Press the Enter=Do key when your screen looks like:
                                 Import a Volume Group

    Type or select values in entry fields.
    Press Enter AFTER making all desired changes.

    [Entry Fields]
    VOLUME GROUP name [availvg]
    * PHYSICAL VOLUME name [hdisk0] +
    * ACTIVATE volume group after it is yes +
    imported?
    Volume group MAJOR NUMBER [] +#












    F1=Help F2=Refresh F3=Cancel F4=List
    F5=Reset F6=Command F7=Edit F8=Image
    F9=Shell F10=Exit Enter=Do

  5. Press the F10=Exit key to return to the command prompt when smit returns an OK message.
    **** Warning - Import user volume groups ****

    With the level of AIX Version 4 that we used for these examples, we found that it is always necessary to re-import your user volume groups when a rootvg backup image is installed.


  6. Repeat this sequence for all volume groups whose data you want to access. In this example, repeat the command for the perfvg and stripevg volume groups.

    We executed importvg directly from the command line as follows:


    # importvg -y availvg hdisk0
    availvg
    # importvg -y perfvg hdisk1
    perfvg
    # importvg -y stripevg hdisk4
    stripevg
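With several user volume groups, the same importvg calls can be driven from a list of (volume group, representative disk) pairs; one disk per group suffices because, as shown by lqueryvg, each VGDA lists all members. This sketch only prints the commands:

```shell
# Print (rather than run) one importvg command per user volume group.
# The pairs are the ones used in this example.
for pair in "availvg hdisk0" "perfvg hdisk1" "stripevg hdisk4"; do
    set -- $pair                 # split the pair into $1 (VG) and $2 (disk)
    echo "importvg -y $1 $2"
done
```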

We can easily confirm that the volume group configuration has been restored by the following command:


# lspv
hdisk0 0000020158496d72 availvg
hdisk1 00000201dc8b0b32 perfvg
hdisk2 000002007bb618f5 availvg
hdisk3 none None
hdisk4 000137231982c0f2 stripevg
hdisk5 00014732b1bd7f57 rootvg
hdisk6 0001221800072440 stripevg
hdisk7 00012218da42ba76 rootvg
hdisk8 0002479088f5f347 perfvg
#

Note that all volume groups were automatically varied on when they were imported, as can be seen from the output of:


# lsvg -o
stripevg
perfvg
availvg
rootvg

Now that the volume group configuration has been restored, we can mount the journaled file systems in these volume groups to check that we can still access our data by executing the command:


# mount all
mount: /dev/hd1 on /home: Device busy
mount: /dev/newlv on /newfs: No such file or directory
Replaying log for /dev/perflv1.
Replaying log for /dev/availlv.

We can quickly check the restoration of the files in our journaled file systems from the command:


# df
Filesystem 512-blocks Free %Used Iused %Iused Mounted on
/dev/hd4 8192 4520 44% 713 34% /
/dev/hd2 409592 32712 92% 5044 9% /usr
/dev/hd9var 24576 21712 11% 76 1% /var
/dev/hd3 24576 22640 7% 59 1% /tmp
/dev/hd1 8192 7712 5% 51 4% /home
/dev/perflv1 81920 79280 3% 16 0% /perfjfs1
/dev/perflv2 81920 79280 3% 16 0% /perfjfs2
/dev/perflv3 98304 95152 3% 16 0% /perfjfs3
/dev/perflv4 81920 79280 3% 16 0% /perfjfs4
/dev/availlv 49152 4888 90% 17 0% /availjfs
/dev/strlv16k 98304 9856 89% 18 0% /strjfs16k
/dev/strlv32k 65536 2640 95% 42 0% /strjfs32k
/dev/lv01 57344 616 98% 5726 8% /frag512
/dev/lv00 57344 0 100% 5100 7% /frag4096
/dev/lv02 16384 288 98% 1748 85% /frag512-1

From a comparison with this command's output taken before the rootvg image was built, we can see that all journaled file systems are identical, except that the rootvg journaled file systems have more free space. This occurred because we deleted some files before we built the image.

Finally, we need to check whether our rootvg mirrored configuration will protect us from disk failure. The output of the following command suggests that we may be safe:


# lsvg -l rootvg
rootvg:
LV NAME TYPE LPs PPs PVs LV STATE MOUNT POINT
hd6 paging 8 16 2 open/syncd N/A
hd5 boot 1 1 1 closed/syncd N/A
hd8 jfslog 1 2 2 open/syncd N/A
hd4 jfs 1 2 2 open/syncd /
hd2 jfs 50 100 2 open/syncd /usr
hd9var jfs 3 6 2 open/syncd /var
hd3 jfs 3 6 2 open/syncd /tmp
hd1 jfs 1 2 2 open/syncd /home
hd5x boot 2 2 1 closed/syncd N/A
#

Now the last time our example system booted, it used the hd5 logical volume on hdisk5, the 670MB disk at SCSI address 08-00. This can be seen from:


# bootinfo -b
hdisk5
#

We used the strings command on both the hd5 and hd5x raw devices and thought that we may be able to reboot from the hd5x logical volume after executing:


# bootlist -m normal hdisk7 hdisk5

However, our reboot hung at some point after the message PERFORM auto-varyon of Volume Groups was displayed. This suggests either a problem with what we thought was a valid boot image on hd5x, or a problem with the new physical partition map that was created by the AIX Version 4 installation program.

When we checked the new rootvg physical partition map, we found that the hd5x logical volume had been created on hdisk5 instead of hdisk7. Hence we used the following commands:


# migratepv -l hd5x hdisk5 hdisk7
# bosboot -a -l /dev/hd5x -d /dev/hdisk7

bosboot: Boot image is 4275 512 byte blocks.
# bosboot -a -l /dev/hd5 -d /dev/hdisk5

bosboot: Boot image is 4275 512 byte blocks.

For more examples of how to use the migratepv command, please refer to How to Use the migratepv Command.

We are now able to successfully boot using either hdisk5 or hdisk7, so our first example installation of a rootvg image is complete.


**** Warning - Always document rootvg ****

This boot problem reminds us that it is very important to use some commands, such as those discussed in How to Document the Volume Group Design, to record the physical partition map of a mirrored rootvg configuration.


So far, we've tried to install a rootvg backup image that uses map files (and hence does not change the journaled file systems' sizes). However, in our example, we were forced to abandon the use of map files because the disk names changed. If you do use map files, you should execute the command lsvg -M rootvg > filename on both the source and target machines, and then use the diff command on the output files to confirm that the physical partition maps are identical.
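A sketch of that comparison follows, using two tiny sample files in the lsvg -M style; on the real machines each file would be produced by lsvg -M rootvg > filename:

```shell
# Two sample physical partition maps in lsvg -M style
# (disk:partition logical_volume:logical_partition).
cat > source.map <<'EOF'
hdisk0:1 hd5:1
hdisk0:2 hd6:1
EOF
cat > target.map <<'EOF'
hdisk0:1 hd5:1
hdisk0:2 hd6:1
EOF
# diff exits 0 when the maps are identical
if diff source.map target.map > /dev/null; then
    echo "physical partition maps are identical"
else
    echo "maps differ - review logical volume placement"
fi
```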

How to Save Space in the rootvg

Now we can try to use the new SHRINK option in the installation menus of AIX Version 4 to save space in the rootvg volume group, if some of its logical volumes have unused space. For example, if you de-install a large program product, you may end up with a lot of free space in /usr that you would rather allocate to another logical volume in the rootvg volume group.

Briefly, recall that our rootvg from the end of How to Use the mksysb Command looks like:


# lsvg -l rootvg
rootvg:
LV NAME TYPE LPs PPs PVs LV STATE MOUNT POINT
hd6 paging 8 16 2 open/syncd N/A
hd5 boot 1 1 1 closed/syncd N/A
hd8 jfslog 1 2 2 open/syncd N/A
hd4 jfs 1 2 2 open/syncd /
hd2 jfs 50 100 2 open/syncd /usr
hd9var jfs 3 6 2 open/syncd /var
hd3 jfs 3 6 2 open/syncd /tmp
hd1 jfs 1 2 2 open/syncd /home
hd5x boot 2 2 1 closed/syncd N/A
#

The disk space recovery installation procedure is almost identical to that documented in the previous example in How to Use the mksysb Command. The only difference is that you must install AIX Version 4 with the defaults of:


**** Warning - Do not edit /image.data ****

Although the article Backing Up the System Image Including User Volume Groups in AIX Version 4.1 System Management Guide: Operating System and Devices suggests that we can change the SHRINK variable in the image.data file, the entry for this file in the AIX Version 3.2 Files Reference reminds us that it is not wise to edit this file. Although editing the SHRINK field is said to be acceptable, we suggest that you do not do this, since you can get the same effect by changing the SHRINK field in the AIX Version 4 installation menu.


When the installation is complete, run the commands:


#  df -kI
Filesystem 1024-blocks Used Free %Used Mounted on
/dev/hd4 4096 1928 2168 47% /
/dev/hd2 192512 188184 4328 97% /usr
/dev/hd9var 4096 1084 3012 26% /var
/dev/hd3 12288 1736 10552 14% /tmp
/dev/hd1 4096 240 3856 5% /home
# lsvg -l rootvg
rootvg:
LV NAME TYPE LPs PPs PVs LV STATE MOUNT POINT
hd6 paging 8 16 2 open/syncd N/A
hd5 boot 1 1 1 closed/syncd N/A
hd8 jfslog 1 2 2 open/syncd N/A
hd4 jfs 1 2 2 open/syncd /
hd2 jfs 47 94 2 open/syncd /usr
hd9var jfs 1 2 2 open/syncd /var
hd3 jfs 3 6 2 open/syncd /tmp
hd1 jfs 1 2 2 open/syncd /home
hd5x boot 2 2 1 closed/syncd N/A
#

The df command also shows that we now have a minimal amount of free space in each journaled file system. You can see that we've recovered three logical partitions from hd2 (/usr) and two from hd9var (/var), so we have 10 more free physical partitions in rootvg that can be used to increase or create other logical volumes. The doubling occurs because each logical partition maps to two physical partitions in our two-copy mirrored rootvg volume group.
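The saving can be checked arithmetically from the two lsvg -l listings:

```shell
# Logical partitions recovered, from the before and after lsvg -l output:
# hd2 went from 50 to 47 LPs, hd9var from 3 to 1 LPs.
copies=2                                   # two-copy mirrored rootvg
freed_lps=$(( (50 - 47) + (3 - 1) ))
freed_pps=$(( freed_lps * copies ))
echo "$freed_pps physical partitions recovered"   # 10
```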

Finally, do not forget to check the location of the hd5x logical volume. You may have to move it using migratepv as was shown at the end of the last mksysb example. Also, you will need to repeat the bosboot commands for both hd5 and hd5x before you can simulate a rootvg physical volume failure.

Utilizing the New AIX Version 4 Features

This section contains some practical examples in the usage of the new storage related features in AIX Version 4. This includes:

  1. Striped logical volumes
  2. Fragments
  3. JFS compression
  4. File systems greater than 2GB in size

First, create stripevg that contains hdisk1 and hdisk8 using the same procedure as for the creation of availvg:

  1. Execute smitty vg.
  2. Select Add a Volume Group.
  3. Type stripevg for VOLUME GROUP name.
  4. Type hdisk1 hdisk8 for PHYSICAL VOLUME names.
  5. Press Enter=Do and then F10=Exit when smit returns an OK prompt.

Striped Logical Volumes

There are performance benefits in using striped logical volumes, particularly when sequential read/write access to large files is of importance. Although it is beyond the scope of this book to provide a practical example which would demonstrate this, results taken from a benchmark using striping are provided later in this section.

The purpose of the example to follow is to show how a striped logical volume can be created in AIX Version 4.

Command Line Summary

  1. Create a striped logical volume, consisting of 60 logical partitions, called strlv32k, in the volume group stripevg using disks hdisk1 and hdisk8. Specify a stripe size of 32K:
    # mklv -y'strlv32k' -S'32K' 'stripevg' '60' 'hdisk1 hdisk8'

  2. Create a file system called strjfs32k using logical volume strlv32k:
    # crfs -v jfs -d'strlv32k' -m'/strjfs32k' -A'yes' -p'rw' -t'no' \
    -a frag='4096' -a nbpi='4096' -a compress='no'

  3. Mount the file system:
    # mount /strjfs32k

Detailed Description

The above summary steps show how a striped logical volume can be created and subsequently used to create a journaled file system. In the following section we will use smit to create the same striped logical volume and file system, and will review the steps necessary to identify the resources required.

In the example we will not discuss how to tune a striped logical volume for optimal performance. There are many different factors which need to be considered when tuning a striped logical volume for optimal performance. Some of these include system-wide operating system parameters, real memory requirements of applications, and the availability of hardware resources.

Changing a system-wide operating system parameter such as maxpgahead (maximum number of pages to read ahead), to provide a high performance striped logical volume for one application, can sometimes cause degradation in performance for another application running on the same system. Also, if striping is done across two disks attached to a single SCSI adapter, this would not provide a performance increase over non-striped disks.

Therefore, a lot of research and preparation work needs to be carried out in order to provide an optimal performance environment suitable to all applications. Since the needs for each site will differ, a particular system configuration providing high performance sequential access to files stored in a striped logical volume will not necessarily provide the same performance benefits at another site.

However, some basic principles should be followed when creating striped logical volumes for high performance. These are:

How to Create a Striped Logical Volume

For this example we have chosen to use an existing volume group, stripevg, which consists of two physical volumes, hdisk1 and hdisk8. The logical volume and file system we will create will be called strlv32k and /strjfs32k, respectively. The logical volume will consist of 60 logical partitions.

  1. Use the lsdev command to make sure that the physical volumes hdisk1 and hdisk8 in the volume group stripevg are attached to different SCSI adapters:
    # lsdev -Cc disk
    hdisk1 Available 00-08-00-1,0 670 MB SCSI Disk Drive
    hdisk8 Available 00-07-00-4,0 857 MB SCSI Disk Drive

    We can see from the above output that hdisk1 is attached to the SCSI adapter located in slot 08 and hdisk8 is attached to the SCSI adapter in slot 07.

  2. Check that the physical volumes used in the volume group stripevg have no physical partitions allocated:
    # lsvg -M stripevg

    stripevg
    hdisk1:1-159
    hdisk8:1-203

    The above output shows that no logical volumes currently exist and all partitions on each physical volume are free.

  3. Create a striped logical volume over these physical volumes using the command smitty mklv:
    1. On the first screen enter stripevg for the volume group name and press Enter.
    2. On the second screen, shown below, enter:
      • strlv32k for the field Logical volume NAME.
      • 60 for the field Number of LOGICAL PARTITIONS.
      • hdisk1 hdisk8 for the field PHYSICAL VOLUME names.

                               Add a Logical Volume

      Type or select values in entry fields.
      Press Enter AFTER making all desired changes.

      [TOP] [Entry Fields]
      Logical volume NAME [strlv32k]
      * VOLUME GROUP name stripevg
      * Number of LOGICAL PARTITIONS [60] #
      PHYSICAL VOLUME names [hdisk1 hdisk8] +
      Logical volume TYPE []
      POSITION on physical volume outer_middle +
      RANGE of physical volumes minimum +
      MAXIMUM NUMBER of PHYSICAL VOLUMES [] #
      to use for allocation
      Number of COPIES of each logical 1 +
      partition
      Mirror Write Consistency? yes +
      Allocate each logical partition copy yes +
      on a SEPARATE physical volume?
      [MORE...9]

      F1=Help F2=Refresh F3=Cancel F4=List
      F5=Reset F6=Command F7=Edit F8=Image
      F9=Shell F10=Exit Enter=Do

    3. Press the PageDown key to move to the next page of this smit screen, shown below:
                                Add a Logical Volume

      Type or select values in entry fields.
      Press Enter AFTER making all desired changes.

      [MORE...9] [Entry Fields]

      Number of COPIES of each logical 1 +
      partition
      Mirror Write Consistency? yes +
      Allocate each logical partition copy yes +
      on a SEPARATE physical volume?
      RELOCATE the logical volume during reorganization? yes +
      Logical volume LABEL []
      MAXIMUM NUMBER of LOGICAL PARTITIONS [128]
      Enable BAD BLOCK relocation? yes +
      SCHEDULING POLICY for writing logical parallel +
      partition copies
      Enable WRITE VERIFY? no +
      File containing ALLOCATION MAP []
      Stripe Size? [32K] +
      [BOTTOM]

      F1=Help F2=Refresh F3=Cancel F4=List
      F5=Reset F6=Command F7=Edit F8=Image
      F9=Shell F10=Exit Enter=Do

    4. Using the Tab key toggle to the value 32K for the field Stripe Size?
    5. Press Enter.
    6. Press F10 to exit when smit returns with OK.
  4. Create a journaled file system using the logical volume strlv32k with the command smitty crjfslv.

    The following smit screen will appear:


      Add a Journaled File System on a Previously Defined Logical Volume



    Type or select values in entry fields.
    Press Enter AFTER making all desired changes.

    [Entry Fields]
    * LOGICAL VOLUME name [] +

    * MOUNT POINT []
    Mount AUTOMATICALLY at system restart? yes +
    PERMISSIONS read/write +
    Mount OPTIONS [] +
    Start Disk Accounting? no +
    Fragment Size (bytes) 4096 +
    Number of bytes per inode 4096 +
    Compression algorithm no +



    F1=Help F2=Refresh F3=Cancel F4=List
    F5=Reset F6=Command F7=Edit F8=Image
    F9=Shell F10=Exit Enter=Do

    On this smit screen:

    1. Press F4 and select strlv32k from the list.
    2. Enter /strjfs32k for the field MOUNT POINT.
    3. Using the Tab key, toggle to the value yes for the field Mount AUTOMATICALLY at system restart?
    4. Press Enter.
    5. Press F10 to exit when smit returns with OK.

    We have now successfully created a striped logical volume and journaled file system.
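
For readers who prefer the command line, the smit steps above can be condensed into two commands. This is a sketch, not output captured from smit: the mklv -S flag sets the stripe size, and crfs -d builds the file system on a previously defined logical volume. The script below only prints the commands, so they can be reviewed before being run on a live system.

```shell
#!/bin/sh
# Print the command-line equivalents of the smit dialogs above.
# mklv:  -y logical volume name, -S stripe size, then the volume
#        group, number of logical partitions, and physical volumes.
# crfs:  -d names the existing logical volume, -m the mount point.
lvname=strlv32k
vg=stripevg
lps=60
disks="hdisk1 hdisk8"
echo "mklv -y $lvname -S 32K $vg $lps $disks"
echo "crfs -v jfs -d $lvname -m /strjfs32k -A yes"
```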

Benchmark Results for an I/O Bound Test Using Striping

Based on a particular Motorola benchmark, an intensive I/O bound application was developed in FORTRAN. The application test included performing continuous sequential access to about 2.4GB of data held within a journaled file system. The I/O activity performed by the application included reading forward, reading backward, and reading then writing forward. For comparison, the test was conducted, on separate occasions, using a striped logical volume and a non-striped logical volume.

The striped logical volume was created with a stripe size of 32K, and the block size used for the disk I/O was 98304 bytes (96K). Certain operating system parameters were tuned with the vmtune command to achieve better performance. For example, maxpgahead was set to 256 to allow up to this many 4K pages to be read ahead sequentially.

The hardware environment used for the test consisted of a RISC System/6000 model 590 with 512MB of memory. Three Corvette adapters and six 2GB disks were configured for the striping test.

The results of this test were as follows:


**** Results ****
         Non-striped Run                        Striped Run
   User     System     Elapsed          User     System     Elapsed
   (hrs)    (hrs)      (hrs)            (hrs)    (hrs)      (hrs)

   10.01    2.05       25.18            10.05    2.21       14.50

From the above timing results we can see that the test conducted using a striped logical volume was completed in 14.50 hours, whereas the same test within a non-striped logical volume took 25.18 hours. With the user and system times similar for both tests we can conclude that striping provides much better performance.
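
The elapsed-time improvement in the table can also be expressed as a speedup ratio; the one-liner below reproduces the arithmetic:

```shell
#!/bin/sh
# Speedup = non-striped elapsed time / striped elapsed time,
# using the elapsed hours from the results table above.
awk 'BEGIN { printf "speedup: %.2fx\n", 25.18 / 14.50 }'
```

That is, the striped run finished roughly 1.7 times faster.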

How to Use Fragments for Disk Usage Efficiency

The purpose of this example is to show how file systems created with a small fragment size can provide better disk space utilization than file systems created with a large fragment size when used to store a large number of small files. The example will demonstrate how to set up file systems with different fragment sizes and number-of-bytes-per-inode (NBPI) values.

For this exercise we will create two file systems, one with a fragment size of 512 bytes and the other with the default fragment size of 4096 bytes. For both file systems we will use a value of 512 for NBPI so that more than the default number of inodes are created. Each file system will be allocated 50000 512 byte blocks.

The test for efficient use of disk space will be determined by the disk space used when a number of equal sized files are stored within each file system. In each of these file systems we will store several small files, each 512 bytes in size.
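
Before running the test, the expected space consumption can be predicted: each file occupies a whole number of fragments, so a file of f bytes in a file system with fragment size F uses ceil(f/F) fragments. A sketch of the arithmetic for our eight 512 byte files:

```shell
#!/bin/sh
# Predicted space for 8 files of 512 bytes each, per fragment size.
files=8
filesize=512
for frag in 512 4096
do
    # Round the file size up to a whole number of fragments.
    perfile=$(( ( (filesize + frag - 1) / frag ) * frag ))
    echo "frag=$frag: $(( files * perfile )) bytes"
done
```

With 512 byte fragments the eight files should consume 4096 bytes; with 4096 byte fragments, 32768 bytes.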

Command Line Summary

  1. First create a journaled file system called /frag512 with a 512 byte fragment size and a 512 NBPI value in the existing volume group stripevg:
    # crfs -v jfs -g'stripevg' -a size='50000' -m'/frag512' -A'yes' -p'rw' \
    > -t'no' -a frag='512' -a nbpi='512' -a compress='no'

  2. Next create a journaled file system with a 4096 byte fragment size and a 512 NBPI value in the existing volume group stripevg:
    # crfs -v jfs -g'stripevg' -a size='50000' -m'/frag4096' -A'yes' \
    > -p'rw' -t'no' -a frag='4096' -a nbpi='512' -a compress='no'

  3. Mount each of the above file systems:
    # mount /frag512
    # mount /frag4096

Detailed Description

The above two summary steps show how a file system can be created from the command line. In the following section we will look at each command separately, and also conduct example tests to verify the efficiency of using file systems with a small fragment size.

How to Create a File System with a Different Fragment Size

In our example we have chosen to use an existing volume group, stripevg which consists of physical volumes hdisk1 and hdisk8. The file system created with the 512 byte fragment size will be called /frag512 and that created with a fragment size of 4096 bytes will be called /frag4096.

  1. Create the 512 byte fragment size file system using the command smitty crjfs.

    Select the volume group stripevg from the list by moving to it using the down cursor key and pressing Enter.

    On the second smit screen, shown below, enter or change details for the following fields:


     
    Add a Journaled File System

    Type or select values in entry fields.
    Press Enter AFTER making all desired changes.

    [Entry Fields]
    Volume group name stripevg
    * SIZE of file system (in 512-byte blocks) [50000]
    * MOUNT POINT [/frag512]
    Mount AUTOMATICALLY at system restart? yes +
    PERMISSIONS read/write +
    Mount OPTIONS [] +
    Start Disk Accounting? no +
    Fragment Size (bytes) 512 +
    Number of bytes per inode 512 +
    Compression algorithm no +



    F1=Help F2=Refresh F3=Cancel F4=List
    F5=Reset F6=Command F7=Edit F8=Image
    F9=Shell F10=Exit Enter=Do

    1. Enter 50000 for the file system size.
    2. Enter /frag512 for the mount point.
    3. Using the Tab key, toggle to yes for the field Mount AUTOMATICALLY at system restart?
    4. Enter 512 for the fragment size.
    5. Enter 512 for number of bytes per inode.
    6. Press Enter when all fields have been filled out.
    7. When processing finishes smit returns with OK, as shown below:

     
    COMMAND STATUS

    Command: OK stdout: yes stderr: no

    Before command completion, additional instructions may appear below.

    Based on the parameters chosen, the new /frag512 JFS file system
    is limited to a maximum size of 16777216 (512 byte blocks)

    New File System size is 57344





    F1=Help F2=Refresh F3=Cancel F6=Command
    F8=Image F9=Shell F10=Exit /=Find
    n=Find Next

    Press F10 to exit smit.


    **** Note **** The output on this screen shows that the new file system size is 57344 512 byte blocks instead of the requested 50000. This is because the file system size is rounded up to the nearest allocation group size. See Journaled File System for a description of file system structure.
  2. Now repeat the above steps to create the file system called /frag4096. However, this time use 4096 bytes for the file system fragment size instead of 512 bytes.
  3. With both file systems now created we need to mount each in turn using the commands:

# mount /frag512
# mount /frag4096

Look at the output for the two mounted file systems produced by the df command:


# df -I /frag512 /frag4096
Filesystem 512-blocks Used Free %Used Mounted on
/dev/lv01 57344 16528 40816 28% /frag512
/dev/lv00 57344 16504 40840 28% /frag4096

The above output shows us that we have 40816 512 byte blocks available in the file system /frag512 and 40840 512 byte blocks available in the file system /frag4096.
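
The rounding from 50000 to 57344 blocks reported by smit can be reproduced by hand. Assuming, as for this volume group, that the file system is rounded up to whole 4MB units (8192 512 byte blocks):

```shell
#!/bin/sh
# Round a requested size (in 512 byte blocks) up to a whole number
# of 4MB units; 4MB = 8192 x 512 byte blocks.
requested=50000
unit=8192
echo $(( ( (requested + unit - 1) / unit ) * unit ))
```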

How to Test the Efficiency of Disk Space Utilization

Now that we have created two file systems which have almost identical characteristics apart from their fragment sizes, we can look at testing their efficiency for storing a large number of very small files. For the test we will use a file whose size is 512 bytes which will occupy only one 512 byte fragment.

Use the following shell script, mkfile to create the file 512bytefile with a size of 512 bytes.


#!/bin/ksh
# mkfile filesize
usage()
{
clear
echo " "
echo " "
echo " "
echo " "
echo "Usage: mkfile filesize"
echo " filesize should be in multiples of 512 bytes"
echo " "
echo " "
echo " "
echo " "
exit
}
# Main...
if [ $# != 1 ]
then
usage
fi
filesize=$1
filename="$1"bytefile
integer mod=`expr $filesize % 512`
integer div=`expr $filesize / 512`
if [ $mod != 0 ]
then
usage
fi
integer i=0;
integer j=`expr $div \* 128`
> $filename
echo " "
echo "Creating file \"$filename\". Please wait..."
while true
do
echo "yes" >> $filename
i=i+1
if [ $i = $j ]
then
break
fi
done

Create the file using the command:


# cd /var/tmp
# mkfile 512
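
As an aside, a file of an exact size can also be created in one step with the standard dd command. This is a hypothetical shortcut, not part of the original test; note that the resulting all-zero content is even more repetitive than mkfile's output, which matters later when compression is discussed.

```shell
#!/bin/sh
# Create a 512 byte file with dd instead of the mkfile script.
# /dev/zero gives all-zero content, which compresses even more
# readily than mkfile's repeated "yes" lines.
dd if=/dev/zero of=512bytefile bs=512 count=1 2>/dev/null
# Confirm the size in bytes.
wc -c < 512bytefile
```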

To test the number of 512 byte files that can be stored in each file system we will use the following sample Korn shell script called fragcopy. This shell script will continue to make copies in the target file system until either the file system becomes full or the target file count is reached. During processing, a count will be displayed showing the number of files that have been copied successfully. Note that the first file has a count suffix of 0.


#!/bin/ksh
# fragcopy
usage()
{
clear
echo " "
echo " "
echo " "
echo " "
echo "Usage: fragcopy numfiles dir/sourcefilename dir/targetfilename"
echo " "
echo " "
echo " "
echo " "
exit
}
# Main...
# Check the argument count before the assignments below use $1-$3.
if [ $# != 3 ]
then
usage
fi
integer i=0
integer cnt=$1
source=$2
target=$3
while true
do
cp $source $target.$i
if [ $? != 0 ]
then
echo " "
exit
fi
i=i+1
echo " Files copied: \c"
echo "$i\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\c"
if [ $i = $cnt ]
then
echo " "
break
fi
done

Create copies of the file 512bytefile in the file system /frag512 using the command:


# fragcopy 8 /var/tmp/512bytefile /frag512/frag8

Now let us look at the contents of the directory /frag512:


# ls -lt /frag512
total 8
-rw-r--r-- 1 root sys 512 Jul 11 18:02 frag8.0
-rw-r--r-- 1 root sys 512 Jul 11 18:02 frag8.1
-rw-r--r-- 1 root sys 512 Jul 11 18:02 frag8.2
-rw-r--r-- 1 root sys 512 Jul 11 18:02 frag8.3
-rw-r--r-- 1 root sys 512 Jul 11 18:02 frag8.4
-rw-r--r-- 1 root sys 512 Jul 11 18:02 frag8.5
-rw-r--r-- 1 root sys 512 Jul 11 18:02 frag8.6
-rw-r--r-- 1 root sys 512 Jul 11 18:02 frag8.7

Before we look at the df output for /frag512, let us create eight copies of the file 512bytefile in the file system /frag4096.

Create copies of the file 512bytefile in /frag4096 using the command:


# fragcopy 8 /var/tmp/512bytefile /frag4096/frag8

Look at the directory contents for /frag4096:


# ls -lt /frag4096
total 8
-rw-r--r-- 1 root sys 512 Jul 11 18:02 frag8.0
-rw-r--r-- 1 root sys 512 Jul 11 18:02 frag8.1
-rw-r--r-- 1 root sys 512 Jul 11 18:02 frag8.2
-rw-r--r-- 1 root sys 512 Jul 11 18:02 frag8.3
-rw-r--r-- 1 root sys 512 Jul 11 18:02 frag8.4
-rw-r--r-- 1 root sys 512 Jul 11 18:02 frag8.5
-rw-r--r-- 1 root sys 512 Jul 11 18:02 frag8.6
-rw-r--r-- 1 root sys 512 Jul 11 18:02 frag8.7

Now that we have the two file systems with the same number of files let us see how much disk space has been utilized in each using the df command:


# df -I /frag512 /frag4096
Filesystem 512-blocks Used Free %Used Mounted on
/dev/lv01 57344 16536 40808 28% /frag512
/dev/lv00 57344 16568 40776 28% /frag4096

We expected the file system /frag512 to use eight 512 byte fragments (the equivalent of one 4K block) and /frag4096 to use eight 4K blocks. We can verify this by comparing the results of df before and after the file copy operation.

Looking at the change in the Used column of the df output the following calculation shows how many 512 byte blocks have been used by each file system:

blks used by file copy = blks used after - blks used before

Based on this calculation the number of 512 byte blocks used by /frag512 is:

blks used by /frag512 = 16536 - 16528 = 8

This is correct since we created eight 512 byte files in this file system.

The following calculation shows how many blocks were used by the file system /frag4096.

blks used by /frag4096 = 16568 - 16504 = 64

This is also exactly as we expected since each file copied to /frag4096 is allocated a 4K block. With eight files this has resulted in 32K bytes used, which expressed in 512 byte blocks is 64.
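
The two df deltas can be checked directly with shell arithmetic:

```shell
#!/bin/sh
# 512 byte blocks consumed by the eight copies in each file system,
# taken from the Used column of df before and after the test.
echo "/frag512:  $(( 16536 - 16528 )) blocks"
echo "/frag4096: $(( 16568 - 16504 )) blocks"
```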

We can therefore conclude, based on the results of the test, that the use of smaller fragment sizes leads to more efficient use of disk space when a large number of small files need to be stored. We have also observed that file systems using a large fragment size can cause much wasted space particularly when the files being stored are smaller than 4096 bytes.

How to Use JFS Compression and Check its Consequences

This example shows you how to create a compressed journaled file system, and then how to use some simple commands to investigate the effects of compression on both AIX Version 4 performance and disk space usage.

To investigate compression, let's use the availvg volume group. The example was done after the migratepv example discussed in How to Use the migratepv Command. This means that we have three physical volumes available, each with 287 4MB physical partitions, of which only 14 physical partitions are currently used.

Finally, remember that compression can only be specified when a journaled file system is created, and that compression must use a journaled file system with fragments that are less than 4096 bytes; in other words either 512 or 1024 or 2048 bytes. This example also investigates the differences between choosing 512 or 2048 as a fragment size when you create a journaled file system.

Command Summary

The following crfs commands show you how to create two compressed journaled file systems, one with a fragment size of 2048, the other with a fragment size of 512. The third crfs command shows you how to create, for comparison, a journaled file system that also has a fragment size of 2048, but does not use compression.


# crfs -v jfs -g'availvg' -a size='80000' -m'/compress' -A'yes' \
> -p'rw' -t'no' -a frag='2048' -a nbpi='4096' -a compress='LZ'
# crfs -v jfs -g'availvg' -a size='80000' -m'/compress512' -A'yes' \
> -p'rw' -t'no' -a frag='512' -a nbpi='4096' -a compress='LZ'
# crfs -v jfs -g'availvg' -a size='80000' -m'/uncompress' -A'yes' \
> -p'rw' -t'no' -a frag='2048' -a nbpi='4096' -a compress='no'

To check that the journaled file systems have been correctly created, use:


# lsfs -q

To mount the journaled file systems, execute:


# mount /compress
# mount /compress512
# mount /uncompress

Next we can check the performance of the compressed file system by copying a 20MB file (bigfile) to each file system, and measuring the performance. First check that the logical volume configuration is similar:


# lspv -p hdisk0

Now record copy times:


# timex cp /strjfs16k/bigfile /compress
# timex cp /strjfs16k/bigfile /uncompress
# timex cp /strjfs16k/bigfile /compress512

Then check that all files appear to be the same size:


# ls -lt /compress /uncompress /strjfs16k

Lastly, we can check the disk utilization in order to investigate the efficiency of the compression. Copy the following 2560 byte files:


# cp /strjfs32k/fragdata/2560bytefile /compress512/2560bytefile
# cp /strjfs32k/fragdata/2560bytefile /compress/2560bytefile
# cp /strjfs32k/fragdata/2560bytefile /uncompress/2560bytefile
# cp /strjfs32k/fragdata/2560bytefile /frag512/2560bytefile

Use du to check how much space is really used:


# du /strjfs32k/fragdata/2560bytefile
# du /compress512/2560bytefile
# du /compress/2560bytefile
# du /uncompress/2560bytefile
# du /frag512/2560bytefile

Use ls to verify the normal size of each file:


# ls -l /strjfs32k/fragdata/2560bytefile /frag512/2560bytefile
# ls -l /compress512/2560bytefile /compress/2560bytefile
# ls -l /uncompress/2560bytefile

Detailed Guidance

How to Create a Compressed JFS

Since the availvg volume group has plenty of free space, and one totally empty disk after the migration in How to Use the migratepv Command, we can in this case create the journaled file systems straight away, without first creating a target logical volume for each one. As you will see later in this section, the physical partition map that the logical volume manager uses for the journaled file systems in this example does not have a significant effect on our performance results, which are discussed in How to Check the Performance of a Compressed File System.

Although we want to create three journaled file systems for this example, the method is almost identical with only a few fields that are different. Hence smit menus are only provided once.

To create a 40MB compressed journaled file system with a fragment size of 2048 bytes mounted at /compress:

  1. Execute the fastpath smitty crjfs to get to the following volume group menu selection:
     









    ____________________________________________________________________________
    | |
    | Volume Group Name |
    | |
    | Move cursor to desired item and press Enter. |
    | |
    | availvg |
    | rootvg |
    | perfvg |
    | stripevg |
    | |
    | F1=Help F2=Refresh F3=Cancel |
    | F8=Image F10=Exit Enter=Do |
    | /=Find n=Find Next |
    |__________________________________________________________________________|

    While availvg is highlighted, press the Enter=Do key to get to the menu with the title Add a Journaled File System.

    Alternatively, you can go through the smit hierarchy by:

    1. Executing smitty.
    2. Selecting System Storage Management (Physical & Logical Storage).
    3. Selecting File Systems.
    4. Selecting Add / Change / Show / Delete File Systems.
    5. Selecting Journaled File Systems.
    6. Selecting Add a Journaled File System.
    7. Selecting the availvg volume group to get to the menu with the title Add a Journaled File System.
  2. Type 80000 for the field SIZE of file system (in 512-byte blocks); 80000 times 512 bytes is roughly 40MB.
  3. Type /compress for the field MOUNT POINT.
  4. Use the Tab key to toggle the field Mount AUTOMATICALLY at system restart? from no to yes.
  5. Use the Tab key to toggle the field Fragment Size (bytes) from 4096 to 2048, or use the F4=List key to select it.
  6. Use the Tab key to toggle the field Compression algorithm from no to LZ so that your screen looks like:
                              Add a Journaled File System

    Type or select values in entry fields.
    Press Enter AFTER making all desired changes.

    [Entry Fields]
    Volume group name availvg
    * SIZE of file system (in 512-byte blocks) [80000] #
    * MOUNT POINT [/compress]
    Mount AUTOMATICALLY at system restart? yes +
    PERMISSIONS read/write +
    Mount OPTIONS [] +
    Start Disk Accounting? no +
    Fragment Size (bytes) 2048 +
    Number of bytes per inode 4096 +
    Compression algorithm LZ +







    F1=Help F2=Refresh F3=Cancel F4=List
    F5=Reset F6=Command F7=Edit F8=Image
    F9=Shell F10=Exit Enter=Do

  7. Leave the other fields with their default values and press the Enter=Do key to create the /compress journaled file system.

    When the journaled file system has been created, your screen should look like:


                                     COMMAND STATUS

    Command: OK stdout: yes stderr: no

    Before command completion, additional instructions may appear below.
    Based on the parameters chosen, the new /compress JFS file system
    is limited to a maximum size of 134217728 (512 byte blocks)
    New File System size is 81920












    F1=Help F2=Refresh F3=Cancel F6=Command
    F8=Image F9=Shell F10=Exit /=Find
    n=Find Next

  8. Press the key F10=Exit to return to the command prompt.

Now repeat the above procedure to:

  1. Create a 40MB compressed journaled file system with a frag size of 512 bytes mounted at /compress512:
    1. Execute the fastpath smitty crjfs.
    2. With availvg highlighted, press the Enter=Do key to get to the menu with the title Add a Journaled File System.
    3. Type 80000 for the field SIZE of file system (in 512-byte blocks).
    4. Type /compress512 for the field MOUNT POINT.
    5. Use the Tab key to toggle the field Mount AUTOMATICALLY at system restart? from no to yes.
    6. Use the Tab key to toggle the field Fragment Size (bytes) from 4096 to 512, or use the F4=List key to select it.
    7. Use the Tab key to toggle the field Compression algorithm from no to LZ.
    8. Leave the other fields with their default values and press the Enter=Do key to create the /compress512 journaled file system.
    9. Press the key F10=Exit to return to the command prompt.
  2. Create a 40MB non-compressed journaled file system with a fragment size of 2048 bytes mounted at /uncompress:
    1. Execute the fastpath smitty crjfs.
    2. With availvg highlighted, press the Enter=Do key to get to the menu with the title Add a Journaled File System.
    3. Type 80000 for the field SIZE of file system (in 512-byte blocks).
    4. Type /uncompress for the field MOUNT POINT.
    5. Use the Tab key to toggle the field Mount AUTOMATICALLY at system restart? from no to yes.
    6. Use the Tab key to toggle the field Fragment Size (bytes) from 4096 to 2048, or use the F4=List key to select it.
    7. Do not change the field Compression algorithm; leave it with the default setting of no.
    8. Leave the other fields with their default values and press the Enter=Do key to create the /uncompress journaled file system.
    9. Press the key F10=Exit to return to the command prompt.
How to Check the Characteristics of the New JFS

We could use smit to check the characteristics of each individual journaled file system by using the fastpath smitty chjfs and selecting, for example, /compress. To view a summary for all the journaled file systems, we could also:

  1. Execute smitty fs to get to the File Systems menu.
  2. Select List All File Systems to execute the command lsfs.

However, the flag -q now also tells us about the new AIX Version 4 journaled file system attributes, so the best way to check our new journaled file systems is to execute:


# lsfs -q /compress* /uncomp* /frag512 /strjfs32k
Name Nodename Mount Pt VFS Size Options Auto Ac

/dev/lv05 -- /compress jfs 81920 rw yes no
(lv size: 81920, fs size: 81920, frag size: 2048, nbpi: 4096, compress: LZ)
/dev/lv07 -- /compress512 jfs 81920 rw yes no
(lv size: 81920, fs size: 81920, frag size: 512, nbpi: 4096, compress: LZ)
/dev/lv06 -- /uncompress jfs 81920 rw yes no
(lv size: 81920, fs size: 81920, frag size: 2048, nbpi: 4096, compress: no)
/dev/lv01 -- /frag512 jfs 57344 rw yes no
(lv size: 57344, fs size: 57344, frag size: 512, nbpi: 512, compress: no)
/dev/strlv32k -- /strjfs32k jfs 65536 rw yes no
(lv size: 65536, fs size: 65536, frag size: 4096, nbpi: 4096, compress: no)

Note that the output may appear distorted if your screen is not 90 columns wide. We also included data for /frag512 and /strjfs32k since they will be used later in How to Check the Disk Usage of a Compressed File System.

How to Mount the New JFS

To mount the newly created journaled file systems so that we can use them, it is easier to execute the following rather than use smit:


# mount /compress
# mount /uncompress
# mount /compress512

However, if you want to use smit, for example to access /compress:

  1. Execute smitty fs.
  2. Select Mount a File System.
  3. Based on the previous output of the lsfs -q command, type /dev/lv05 in the FILE SYSTEM name field, or use the F4=List key to select it.
  4. Again based on the previous output of the lsfs -q command, type /compress in the DIRECTORY over which to mount field, or use the F4=List key to select it so that your screen looks like:
                                  Mount a File System

    Type or select values in entry fields.
    Press Enter AFTER making all desired changes.

    [Entry Fields]
    FILE SYSTEM name [/dev/lv05] +
    DIRECTORY over which to mount [/compress] +
    TYPE of file system +
    FORCE the mount? no +
    REMOTE NODE containing the file system []
    to mount
    Mount as a REMOVABLE file system? no +
    Mount as a READ-ONLY system? no +
    Disallow DEVICE access via this mount? no +
    Disallow execution of SUID and sgid programs no +
    in this file system?





    F1=Help F2=Refresh F3=Cancel F4=List
    F5=Reset F6=Command F7=Edit F8=Image
    F9=Shell F10=Exit Enter=Do

  5. Press the Enter=Do to execute the mount command:
  6. When smit returns an OK message, press the F10=Exit key to return to the command line.
How to Check the Performance of a Compressed File System

We can obtain a simple indication of the performance cost of using a compressed journaled file system by recording how long it takes to copy a 20MB file to each of the /compress, /compress512 and /uncompress journaled file systems. Of course, the exact nature of your data files, their access rate, and other environmental conditions will give you quite different results from the sample values that this example provides. Note that we do not use smit to execute these simple commands; let's first summarize our test, and then discuss the results.

Before we commence our test, we need to check that the underlying logical volume configuration will not result in any bias in the results. We can check the logical volumes by executing:


# lspv -p hdisk0
hdisk0:
PP RANGE STATE REGION LV ID TYPE MOUNT POINT
1-58 free outer edge
59-68 used outer middle lv05 jfs /compress
69-78 used outer middle lv06 jfs /uncompress
79-88 used outer middle lv07 jfs /compress512
89-115 free outer middle
116-172 free center
173-229 free inner middle
230-287 free inner edge

You can see that all 10 physical partitions of each logical volume are on the same disk region (note that the logical volume manager automatically used the empty hdisk0, after the migratepv in How to Use the migratepv Command, to create the new logical volumes). Each copy operation will be done sequentially from the same source file, so that the main reason for the copy time differences is the attributes of each target journaled file system.

To test the copy times, execute:


# timex cp /strjfs16k/bigfile /compress

real 42.70
user 0.27
sys 5.77
# timex cp /strjfs16k/bigfile /uncompress

real 8.71
user 0.20
sys 5.94
# df -kI /compress /uncompress
Filesystem 1024-blocks Used Free %Used Mounted on
/dev/lv05 40960 11188 29772 27% /compress
/dev/lv06 40960 21020 19940 51% /uncompress
# timex cp /strjfs16k/bigfile /compress512

real 39.66
user 0.25
sys 4.31
# ls -lt /compress /uncompress /strjfs16k
/strjfs16k:
total 39328
-rw-r--r-- 1 root sys 20131943 Jul 21 14:51 bigfile
/uncompress:
total 39328
-rw-r--r-- 1 root sys 20131943 Jul 21 14:57 bigfile
/compress:
total 19664
-rw-r--r-- 1 root sys 20131943 Jul 21 14:53 bigfile

First, notice that although the ls output shows that each file is about 20MB big, the df -kI output gives you an idea of how compression can save disk space. This is looked into in more detail in the next section, How to Check the Disk Usage of a Compressed File System.

However, the timex results indicate the performance cost of these disk saving benefits when we use a compressed file system. It took about 40 seconds to copy bigfile to our compressed file system that uses 512 byte fragments, and about 43 seconds to copy the same file to our compressed file system that uses 2048 byte fragments, but only nine seconds to a non-compressed file system that also uses 2048 byte fragments. Our copy time has increased by about 370% (100 x 34/9), because bigfile was being compressed in /compress by the journaled file system code while the cp command was actually writing the file. Finally, notice that there was not much difference in our example between the copy to a fragment size of 2048, 43 seconds, and the copy to a fragment size of 512, which only saved three seconds.
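
The percentage quoted above comes from the extra 34 seconds (43 - 9) relative to the 9 second non-compressed baseline; truncating gives 377%, rounded in the text to about 370%:

```shell
#!/bin/sh
# Extra copy time as a percentage of the non-compressed copy time.
awk 'BEGIN { printf "increase: about %d%%\n", 100 * 34 / 9 }'
```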

How to Check the Disk Usage of a Compressed File System

To check how much disk space is available initially in the empty journaled file systems, execute:


# df -kI /compress /uncompress /compress512
Filesystem 1024-blocks Used Free %Used Mounted on
/dev/lv05 40960 1332 39628 3% /compress
/dev/lv06 40960 1332 39628 3% /uncompress
/dev/lv07 40960 1344 39616 3% /compress512

Note that the journaled file system with the smaller fragment size (512 versus 2048) has more space initially allocated for the journaled file system organizational data (in other words, areas like the journaled file system maps).

If you refer to How to Use Fragments for Disk Usage Efficiency, you will see that we can use the ksh shell scripts mkfile and fragcopy to create files with a size that is a multiple of 512 byte blocks. In this example, we use the file 2560bytefile, which consists of five 512 byte blocks. We can then use the du command to see how many disk blocks are really used.

If we copy the files using the cp commands given in the command summary, then the following ls command confirms that we have five files that appear to occupy the same amount of disk space.


# ls -l /strjfs32k/fragdata/2560bytefile /frag512/2560bytefile
-rw-r--r-- 1 root sys 2560 Jul 21 16:53 /strjfs32k/fragdata/2560bytefile
-rw-r--r-- 1 root sys 2560 Jul 21 17:09 /frag512/2560bytefile
# ls -l /compress512/2560bytefile /compress/2560bytefile
-rw-r--r-- 1 root sys 2560 Jul 21 16:56 /compress512/2560bytefile
-rw-r--r-- 1 root sys 2560 Jul 21 16:57 /compress/2560bytefile
# ls -l /uncompress/2560bytefile
-rw-r--r-- 1 root sys 2560 Jul 21 16:57 /uncompress/2560bytefile

However, the following du command output reports the true number of 512 byte disk blocks that are actually used by each file:


# du /strjfs32k/fragdata/2560bytefile
8 /strjfs32k/fragdata/2560bytefile
# du /frag512/2560bytefile
5 /frag512/2560bytefile
# du /compress512/2560bytefile
1 /compress512/2560bytefile
# du /compress/2560bytefile
4 /compress/2560bytefile
# du /uncompress/2560bytefile
8 /uncompress/2560bytefile

To correctly interpret this data, we need to recall the journaled file system attributes documented in How to Check the Characteristics of the New JFS.

As expected, the source file /strjfs32k/fragdata/2560bytefile requires 8 x 512 = 4096 bytes, because /strjfs32k was created with default journaled file system attributes, so it has to use at least one complete 4K fragment to store 2560 bytes. The file /uncompress/2560bytefile also uses 4096 bytes, but in this case it used two 2048 byte fragments. The first fragment is completely full, but the second fragment contains 3 x 512 = 1536 bytes of wasted space that cannot be used by any other file.

If we now use compression, our disk space requirements are halved, since we now only require 4 x 512 = 2048 bytes to store the /compress/2560bytefile file. However, this still fills one complete fragment in /compress, which has a fragment size of 2048 bytes. When we check the space used by /compress512/2560bytefile, we can see that the LZ compression algorithm has actually shrunk the file to 20% (512 / 2560) or less of its original size. This is not too surprising, since we know that the shell script mkfile (as given in How to Use Fragments for Disk Usage Efficiency) just creates a file that has the word yes repeated on each line many times. Such a repetitive file is likely to be much easier to compress than your real data files.

Finally, we can see that if performance is not critical, it is wise to combine compression with a small fragment size when you create a journaled file system. The du output for the file /frag512/2560bytefile shows that journaled file systems with a 512 byte fragment size require, as expected, five fragments to store 2560 bytes. However, this is reduced to only one fragment when the journaled file system is also configured at creation to use compression as well as a 512 byte fragment size.
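The arithmetic behind the non-compressed du figures above can be summarized in a small helper function (blocks_used is our own name, not an AIX command):

```shell
# blocks_used FILESIZE FRAGSIZE prints the number of 512 byte disk
# blocks a non-compressed journaled file system needs to store FILESIZE
# bytes, given that space is allocated in whole fragments of FRAGSIZE.
blocks_used() {
    size=$1 frag=$2
    frags=$(( (size + frag - 1) / frag ))   # round up to whole fragments
    echo $(( frags * frag / 512 ))
}

blocks_used 2560 4096   # prints 8, as du reported for /strjfs32k
blocks_used 2560 2048   # prints 8, as du reported for /uncompress
blocks_used 2560 512    # prints 5, as du reported for /frag512
```

The compressed file systems report fewer blocks than this because compression shrinks the data before fragments are allocated.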

How to Create and Use a JFS Greater than 2GB

This section shows you how to expand an existing journaled file system to a size greater than 2GB, which is the maximum journaled file system size in AIX Version 3. Again, as in the section How to Use JFS Compression and Check its Consequences, we will use the availvg volume group, because it has the most available disk space. We will be increasing the availlv to a new total size of 3GB after removing its mirror copy so that we have enough disk space.

Command Summary

First of all we reduce the number of copies of availlv to 1:


# rmlvcopy 'availlv' '1'

Next we increase the maximum number of logical partitions for the logical volume to 900, which allows it to grow to 4 x 900 = 3600MB:


# chlv -x'900' 'availlv'

Finally, we increase the size of the JFS availjfs to 6000000 512 byte blocks, which equates to approximately 3GB:


# chfs -a size='6000000' '/availjfs'
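Note that chfs expects the size in 512 byte blocks. A small conversion helper may be useful (mb_to_blocks is our own name; the figure above uses a decimal 3GB, hence 6000000 rather than the binary 6144000):

```shell
# Convert a size in (binary) megabytes to the 512 byte block count that
# chfs -a size= expects. mb_to_blocks is our own helper, not AIX's.
mb_to_blocks() {
    echo $(( $1 * 1024 * 1024 / 512 ))
}

mb_to_blocks 3000    # prints 6144000: 3000 binary MB in 512 byte blocks
```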

Detailed Guidance

How to Remove a Logical Volume Copy

As can be seen from the following:


# lsvg availvg
VOLUME GROUP: availvg VG IDENTIFIER: 000004461ed9e52e
VG STATE: active PP SIZE: 4 megabyte(s)
VG PERMISSION: read/write TOTAL PPs: 861 (3444 megabytes)
MAX LVs: 256 FREE PPs: 817 (3268 megabytes)
LVs: 5 USED PPs: 44 (176 megabytes)
OPEN LVs: 5 QUORUM: 1
TOTAL PVs: 3 VG DESCRIPTORS: 3
STALE PVs: 0 STALE PPs 0
ACTIVE PVs: 3 AUTO ON: yes
# lsvg -l availvg
availvg:
LV NAME TYPE LPs PPs PVs LV STATE MOUNT POINT
availlv jfs 6 12 2 open/syncd /availjfs
loglv00 jfslog 1 2 2 open/syncd N/A
lv05 jfs 10 10 1 open/syncd /compress
lv06 jfs 10 10 1 open/syncd /uncompress
lv07 jfs 10 10 1 open/syncd /compress512
#

availlv currently has two mirror copies. However, the availvg volume group is not big enough to hold two mirror copies of the availlv logical volume when it is expanded to 3GB. This would require 6GB in availvg, and we currently only have 3268MB available, as indicated by the FREE PPs: field in the second column of the output of the lsvg availvg command.

Hence we can remove one of the mirror copies if we:

  1. Execute the fastpath smitty rmlvcopy to get to the screen with the title Remove Copies from a Logical Volume, or, to go through the smit hierarchy:
    1. Execute smitty.
    2. Select System Storage Management (Physical & Logical Storage).
    3. Select Logical Volume Manager.
    4. Select Logical Volumes.
    5. Select Set Characteristic of a Logical Volume.
    6. Select Remove Copies from a Logical Volume.
  2. Type availlv in the LOGICAL VOLUME name field and press the Enter=Do key, or use the F4=List key to select it.
  3. Use the Tab key to toggle the contents of the field NEW maximum number of logical partition copies from 2 to 1 so that the screen looks like:
                          Remove Copies from a Logical Volume

    Type or select values in entry fields.
    Press Enter AFTER making all desired changes.

    [Entry Fields]
    * LOGICAL VOLUME name availlv
    * NEW maximum number of logical partition 1 +
    copies
    PHYSICAL VOLUME name(s) to remove copies from [] +













    F1=Help F2=Refresh F3=Cancel F4=List
    F5=Reset F6=Command F7=Edit F8=Image
    F9=Shell F10=Exit Enter=Do

    Since one copy of availlv currently exists on each of two identical disks (so that we are protected from disk failure), then we can ignore the third field. However, if one disk is not as fast or reliable as the other, then we may decide to remove the copy of availlv that it contains.

  4. Press the Enter=Do key to remove a copy of availlv.
  5. When smit returns an OK prompt, press the F10=Exit key to return to the command prompt.
How to Change a Logical Volume Copy

Now availlv only exists as a single copy logical volume, and there is sufficient space available in availvg to expand it to 3GB. However, by default, when a logical volume is created, it is limited to a maximum of 128 logical partitions. Since the availvg volume group uses a physical partition size of 4MB, availlv can only grow to 4 x 128 = 512MB. This also means that the /availjfs file system that uses this logical volume cannot grow beyond 512MB either.

Fortunately, you can change the maximum number of logical partitions in a logical volume if you:

  1. Execute the fast path smitty chlv1 to get to the menu with the title Change a Logical Volume, or, to go through the smit hierarchy:
    1. Execute smitty.
    2. Select System Storage Management (Physical & Logical Storage).
    3. Select Logical Volume Manager.
    4. Select Logical Volumes.
    5. Select Set Characteristic of a Logical Volume.
    6. Select Change a Logical Volume.
  2. Type availlv in the LOGICAL VOLUME name field and press the Enter=Do key, or use the F4=List key to select it.
  3. Type 900 in the MAXIMUM NUMBER of LOGICAL PARTITIONS field.

    This enables us to increase the size of the /availjfs journaled file system that uses the availlv logical volume up to 4 x 900 = 3600MB. Hence we still allow for 600MB of growth beyond our initial 3GB objective, so that we can later increase /availjfs without having to first change the MAXIMUM NUMBER of LOGICAL PARTITIONS again (unless, of course, we wanted to increase /availjfs by another 1GB).

  4. We are not currently concerned about the other fields so we'll leave them with default values so that the screen should look like:
                                Change a Logical Volume

    Type or select values in entry fields.
    Press Enter AFTER making all desired changes.

    [TOP] [Entry Fields]
    * Logical volume NAME availlv
    Logical volume TYPE [jfs]
    POSITION on physical volume outer_middle +
    RANGE of physical volumes maximum +
    MAXIMUM NUMBER of PHYSICAL VOLUMES [32] #
    to use for allocation
    Allocate each logical partition copy yes +
    on a SEPARATE physical volume?
    RELOCATE the logical volume during reorganization? yes +
    Logical volume LABEL [/availjfs]
    MAXIMUM NUMBER of LOGICAL PARTITIONS [900]
    SCHEDULING POLICY for writing logical parallel +
    partition copies
    PERMISSIONS read/write +

    [MORE...3]

    F1=Help F2=Refresh F3=Cancel F4=List
    F5=Reset F6=Command F7=Edit F8=Image
    F9=Shell F10=Exit Enter=Do

  5. Press the Enter=Do to change this logical volume.
  6. When smit returns with an OK prompt, press the F10=Exit to return to the command line.
How to Increase the Size of a JFS

Now that we have ensured that the availlv logical volume can accommodate a 3GB journaled file system, we can increase the /availjfs journaled file system.

To increase /availjfs so that it is a total of 3GB:

  1. Execute the fast path smitty chjfs to get to a screen like:
     
    ____________________________________________________________________________
    | |
    | File System Name |
    | |
    | Move cursor to desired item and press Enter. |
    | |
    | [TOP] |
    | / |
    | /home |
    | /usr |
    | /var |
    | /tmp |
    | /newfs |
    | /availjfs |
    | /strjfs32k |
    | /frag512 |
    | /frag4096 |
    | /frag512-1 |
    | [MORE...9] |
    | |
    | F1=Help F2=Refresh F3=Cancel |
    | F8=Image F10=Exit Enter=Do |
    | /=Find n=Find Next |
    |____________________________________________________________________________|

    Or, to go through the smit hierarchy to get to the above selection screen:

    1. Execute smitty.
    2. Select System Storage Management (Physical & Logical Storage).
    3. Select File Systems.
    4. Select Add / Change / Show / Delete File Systems.
    5. Select Journaled File Systems.
    6. Select Change / Show Characteristics of a Journaled File System.
  2. Use the Down Arrow to move the cursor so that the journaled file system /availjfs is highlighted.
  3. Press the Enter=Do key.
  4. Type 6000000 in the SIZE of file system (in 512-byte blocks) field, since 6000000 x 512 = 3072000000 bytes, or approximately 3GB.

    Since this is the only field we need to change, the screen should look like:


                 Change/Show Characteristics of a Journaled File System


    Type or select values in entry fields.
    Press Enter AFTER making all desired changes.

    [Entry Fields]
    File system name /availjfs
    NEW mount point [/availjfs]
    SIZE of file system (in 512-byte blocks) [6000000]
    Mount GROUP []
    Mount AUTOMATICALLY at system restart? yes +
    PERMISSIONS read/write +
    Mount OPTIONS [] +
    Start Disk Accounting? no +
    Fragment Size (bytes) 4096
    Number of bytes per inode 4096
    Compression algorithm no




    F1=Help F2=Refresh F3=Cancel F4=List
    F5=Reset F6=Command F7=Edit F8=Image
    F9=Shell F10=Exit Enter=Do

  5. Press the Enter=Do key to change the journaled file system.
  6. When the command is complete, your screen should look like:
                                     COMMAND STATUS

    Command: OK stdout: yes stderr: no

    Before command completion, additional instructions may appear below.

    File System size changed to 6004736















    F1=Help F2=Refresh F3=Cancel F6=Command
    F8=Image F9=Shell F10=Exit /=Find
    n=Find Next

  7. Press the F10=Exit key to return to the command prompt.
How to Check the Attributes of a JFS greater than 2GB

Now that we have increased /availjfs to a total size of 3GB, we have also forced the availlv logical volume that the /availjfs journaled file system uses to increase to 3GB. We can check how availvg and availlv have changed by executing the following commands:


# lsvg availvg
VOLUME GROUP: availvg VG IDENTIFIER: 000004461ed9e52e
VG STATE: active PP SIZE: 4 megabyte(s)
VG PERMISSION: read/write TOTAL PPs: 861 (3444 megabytes)
MAX LVs: 256 FREE PPs: 96 (384 megabytes)
LVs: 5 USED PPs: 765 (3060 megabytes)
OPEN LVs: 5 QUORUM: 1
TOTAL PVs: 3 VG DESCRIPTORS: 3
STALE PVs: 0 STALE PPs 0
ACTIVE PVs: 3 AUTO ON: yes
# lsvg -l availvg
availvg:
LV NAME TYPE LPs PPs PVs LV STATE MOUNT POINT
availlv jfs 733 733 3 open/syncd /availjfs
loglv00 jfslog 1 2 2 open/syncd N/A
lv05 jfs 10 10 1 open/syncd /compress
lv06 jfs 10 10 1 open/syncd /uncompress
lv07 jfs 10 10 1 open/syncd /compress512
#

We can clearly see that the availvg volume group now uses 3060MB, with only 384MB left that can be allocated from the existing physical volumes to new or existing logical volumes in this volume group. Also, we note that the availlv logical volume now uses 733 4MB physical partitions, which is approximately 4 x 733 = 2932MB (remember that 1MB may be 1048576 bytes or 1000000 bytes, depending upon the context in which it is used, so rounding errors can become significant).

To verify that we can actually use all of this space, we again used the fragcopy script from How to Use Fragments for Disk Usage Efficiency, to copy a 21MB file many times.


# ls -l /availjfs/cmds.rom.dd
-rw-r--r-- 1 root sys 21805056 Jul 8 12:23 /availjfs/cmds.rom.dd
# ksh fragcopy 140 /availjfs/cmds.rom.dd /availjfs/cmds.rom.dd &
# ls -l /availjfs
total 72608
-rw-r--r-- 1 root sys 21805056 Jul 8 12:23 cmds.rom.dd
-rw-r--r-- 1 root sys 15368192 Jul 21 18:50 cmds.rom.dd.0
cp: /availjfs/cmds.rom.dd.135: No space left on device
# df -kI /availjfs
Filesystem 1024-blocks Used Free %Used Mounted on
/dev/availlv 3002368 3002368 0 100% /availjfs
#
# ls /availjfs |wc -l
137
# ls -lt /availjfs|more
total 5808264
-rw-r--r-- 1 root sys 8065024 Jul 21 19:40 cmds.rom.dd.135
-rw-r--r-- 1 root sys 21805056 Jul 21 19:40 cmds.rom.dd.134
-rw-r--r-- 1 root sys 21805056 Jul 21 19:39 cmds.rom.dd.133

From the results of the ls commands, we can see that there was enough room for 134 + 1 = 135 complete new copies of the original 21MB file before we filled the /availjfs journaled file system. The full journaled file system is indicated by both the cp command error, and the output of the df -kI command.

This example shows that AIX Version 4 allows us to successfully use journaled file systems that have a capacity greater than the AIX Version 3 limit of 2GB. Although we will not go into the details here, you can also specify a size greater than 2GB when you initially create the logical volume and its associated journaled file system.

Migrating to AIX Version 4

This section discusses and shows how the logical volume manager and journaled file system configuration of an existing AIX Version 3 system can be maintained when it is upgraded to AIX Version 4 by a migration installation.

The actual migration installation is discussed in depth in the AIX Version 4.1 Installation Guide, which is the essential companion to this section.

Command Line Summary

Ensure that you have documented your storage organization prior to migration so that you can confirm the process occurs successfully. Remember to take adequate backups (as described in Managing Backup and Restore), before any reorganization.

Use the following command for each physical volume to establish partition allocation:


# lspv -M hdiskx > map.hdiskx

This will store the output for hdiskx in the file map.hdiskx. Use the following commands for volume groups to record the configuration:


# lsvg volume_group > lsvg-volume_group
# lsvg -l volume_group > lsvg-l-volume_group

This will store information regarding volume group organization and the logical volumes contained within them. Lastly record information about the file systems:


# df > df.fs
# cat /etc/filesystems > fs

This will save information regarding the file systems and their configuration.
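The individual commands above can be collected into one small script so that the snapshot is easy to repeat after the migration. This is only a sketch: the wrapper name snapshot_storage is our own, and the disk and volume group names are those of our example, so they would need adjusting for your system:

```shell
# Capture the storage configuration into files in the current directory.
# snapshot_storage is our own wrapper; the commands inside it are the
# ones shown above. Pass a suffix such as ".41" for a second snapshot.
snapshot_storage() {
    suffix=$1
    for pv in hdisk0 hdisk1 hdisk2; do
        lspv -M "$pv" > "map.$pv$suffix"
    done
    for vg in rootvg 325vg; do
        lsvg "$vg"    > "lsvg-$vg$suffix"
        lsvg -l "$vg" > "lsvg-l-$vg$suffix"
    done
    df > "df.fs$suffix"
    cat /etc/filesystems > "fs$suffix"
}
```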

Given that all other planning and organizational tasks have been performed (see AIX Version 4.1 Installation Guide), we can now restart our system with the AIX Version 4 installation media loaded and the key in the service position. Selecting the migration option from the Installation and Settings menu will cause a migration to AIX Version 4 to occur. See the detailed guidance section in this chapter for more information.

Lastly, once the migration has successfully taken place, we can confirm that our storage organization is as we expected it to be. Essentially, we can perform the same documentation tasks that we instituted at the start of this process, and then compare the results.

Detailed Guidance

Our migration test uses a graphical console display and CD-ROM AIX Version 4 installation media. As well as the important prerequisite of having a good backup of whatever operating system and data is on the target disks, it is also highly advisable to have the current configuration documented. For this example, documenting the AIX V3.2.5 logical volume manager and journaled file system configuration also shows how this is preserved during the migration to AIX Version 4.

How to Document AIX Version 3.2.5 before a Migration
  1. Logical volume manager configuration

    Our example system has three physical volumes organized in two volume groups that are rootvg and 325vg. The exact disk partition map can be saved to a separate map file for each disk by executing the following familiar sequence of commands:


    # lspv -M hdisk0 > map.hdisk0
    # lspv -M hdisk1 > map.hdisk1
    # lspv -M hdisk2 > map.hdisk2

    Your file format should be similar to the following example for hdisk2:
    hdisk2:1-17
    hdisk2:18 loglv00:1
    hdisk2:19 lv00:1
    hdisk2:20 lv00:2
    hdisk2:21 lv00:3
    hdisk2:22 lv00:4
    hdisk2:23 lv00:5
    hdisk2:24 lv00:6
    hdisk2:25 lv00:7
    hdisk2:26 lv00:8
    hdisk2:27 lv00:9
    hdisk2:28 lv00:10
    hdisk2:29-84

    To summarize the logical volume manager information, we can use many of the commands discussed in
    Storage Management Files and Commands Summary. In our example, we use the following sequence of lsvg commands:
    # lsvg rootvg > lsvg-rootvg
    # lsvg -l rootvg > lsvg-l-rootvg
    # lsvg 325vg > lsvg-325vg
    # lsvg -l 325vg > lsvg-l-325vg

    We can check the contents of these configuration files by executing the following sequence of cat commands:
    # cat lsvg-rootvg
    VOLUME GROUP: rootvg VG IDENTIFIER: 000005083df45081
    VG STATE: active PP SIZE: 4 megabyte(s)
    VG PERMISSION: read/write TOTAL PPs: 243 (972 megabytes)
    MAX LVs: 256 FREE PPs: 8 (32 megabytes)
    LVs: 16 USED PPs: 235 (940 megabytes)
    OPEN LVs: 12 QUORUM: 2
    TOTAL PVs: 2 VG DESCRIPTORS: 3
    STALE PVs: 0 STALE PPs 0
    ACTIVE PVs: 2 AUTO ON: yes
    #
    # cat lsvg-l-rootvg
    rootvg:
    LV NAME TYPE LPs PPs PVs LV STATE MOUNT POINT
    hd6 paging 10 10 1 open/syncd N/A
    hd61 paging 10 10 1 open/syncd N/A
    hd5 boot 2 2 1 closed/syncd /blv
    hd7 sysdump 2 2 1 open/syncd /mnt
    hd8 jfslog 1 1 1 open/syncd N/A
    hd4 jfs 2 2 1 open/syncd /
    hd2 jfs 76 76 2 open/syncd /usr
    hd1 jfs 1 1 1 open/syncd /home
    hd3 jfs 82 82 2 open/syncd /tmp
    hd9var jfs 31 31 1 open/syncd /var
    hdag1 lfs 2 2 1 closed/syncd N/A
    dumpfiles jfs 5 5 1 open/syncd /var/adm/ras
    agroot lfs 1 1 1 closed/syncd N/A
    tmpvar jfs 1 1 1 closed/syncd N/A
    varrpc jfs 5 5 1 open/syncd /var/dce/rpc/socket
    xmconsole jfs 4 4 1 open/syncd /tmp/xm
    #
    # cat lsvg-325vg
    VOLUME GROUP: 325vg VG IDENTIFIER: 00011605f67f40e9
    VG STATE: active PP SIZE: 4 megabyte(s)
    VG PERMISSION: read/write TOTAL PPs: 84 (336 megabytes)
    MAX LVs: 256 FREE PPs: 73 (292 megabytes)
    LVs: 2 USED PPs: 11 (44 megabytes)
    OPEN LVs: 2 QUORUM: 2
    TOTAL PVs: 1 VG DESCRIPTORS: 2
    STALE PVs: 0 STALE PPs 0
    ACTIVE PVs: 1 AUTO ON: yes
    #
    # cat lsvg-l-325vg
    325vg:
    LV NAME TYPE LPs PPs PVs LV STATE MOUNT POINT
    loglv00 jfslog 1 1 1 open/syncd N/A
    lv00 jfs 10 10 1 open/syncd /325jfs

    The above information shows us that rootvg has two physical volumes in it and that it contains some logical volumes for an application (DCE). We can also see that 325vg contains one physical volume and one data logical volume.

  2. JFS configuration

    This is easier to record; in this example, we saved a copy of the file /etc/filesystems, and we then saved the output of the df command to a file named df.I. We can check its output by executing:


    # cat df.I
    Filesystem Total KB used free %used Mounted on
    /dev/hd4 8192 6520 1672 79% /
    /dev/hd9var 126976 13144 113832 10% /var
    /dev/hd2 311296 299168 12128 96% /usr
    /dev/hd3 335872 76472 259400 22% /tmp
    /dev/hd1 4096 496 3600 12% /home
    /dev/dumpfiles 20480 11120 9360 54% /var/adm/ras
    /dev/varrpc 20480 928 19552 4% /var/dce/rpc/socket
    /dev/xmconsole 16384 544 15840 3% /tmp/xm
    /dev/lv00 40960 1380 39580 3% /325jfs

Installing AIX Version 4

Now that we have completed the documentation and all the prerequisites documented in AIX Version 4.1 Installation Guide in the chapter called Installing BOS from CD-ROM or Tape, we can continue the process described in the Start the System section in the same chapter.

After we select English as an Installation language, and press the Enter key, we arrive at the Welcome to Base Operating System Installation and Maintenance menu. The simplest choice is to now select >>> 1 Installation and Settings so that we can check and change the installation options. In our example, the defaults in the following screen were suitable.


                                Installation and Settings

Either type 0 and press Enter to install with current settings, or type the
number of the setting you want to change and press Enter.

1 System Settings:
Method of Installation.............Migration
Disk Where You Want to Install.....hdisk0...

2 Primary Language Environment Settings (AFTER Install):
Cultural Convention................English (United States)
Language ..........................English (United States)
Keyboard ..........................English (United States)


3 Install Trusted Computing Base.......No

>>> 0 Install AIX with the current settings listed above.

+-----------------------------------------------------
88 Help ? | WARNING: Base Operating System Installation will
99 Previous Menu | destroy or impair recovery of SOME data on the
| destination disk hdisk0.
>>> Choice [0]:

However, if you need to change any of the above values, please refer to the references given in Verify the Default Installation and System Settings in the chapter Installing BOS from CD-ROM or Tape in AIX Version 4.1 Installation Guide. In particular, we found that as we checked the physical volume allocation by following the procedure in Change the Destination Disk in the same chapter, the AIX Version 4 installation process correctly recognized which physical volumes belong to the rootvg, and which belong to other volume groups, based on their unique SCSI addresses. We can now complete the migration installation of AIX Version 4 by following the procedure in the section Install from CD-ROM or Tape in the same chapter.

How to Check the Configuration after Migration

When the system reboots after AIX Version 4 is installed, there may be many systems management tasks for the systems administrator to complete. In our example, we want to quickly check our storage configuration details. We can again generate a set of configuration files as we did before the migration by executing the following set of similar commands:


# lspv -M hdisk0 > map.hdisk0.41
# lspv -M hdisk1 > map.hdisk1.41
# lspv -M hdisk2 > map.hdisk2.41
# lsvg rootvg > lsvg-rootvg.41
# lsvg -l rootvg > lsvg-l-rootvg.41
# lsvg 325vg > lsvg-325vg.41
# lsvg -l 325vg > lsvg-l-325vg.41
# df -Ik > df.Ik.41

We can quickly use the diff command on the map files to verify that our partition map has been maintained. Although diff for the hdisk0 and hdisk2 files gives no output, as expected, diff for hdisk1 results in the following:


# diff map.hdisk1 map.hdisk1.41
11c11,14
< hdisk1:11-17
---
> hdisk1:11 hd2:77
> hdisk1:12 hd2:78
> hdisk1:13 hd2:79
> hdisk1:14-17

This shows that the /usr journaled file system on the hd2 logical volume has grown by three physical partitions during the migration installation. The increased usage of the rootvg volume group is also verified by the change in the lsvg rootvg output as follows:


# diff lsvg-rootvg lsvg-rootvg.41
4,6c4,6
< MAX LVs: 256 FREE PPs: 8 (32 megabytes)
< LVs: 16 USED PPs: 235 (940 megabytes)
< OPEN LVs: 12 QUORUM: 2
---
> MAX LVs: 256 FREE PPs: 5 (20 megabytes)
> LVs: 16 USED PPs: 238 (952 megabytes)
> OPEN LVs: 11 QUORUM: 2

You can see how the diff command used on configuration files can quickly help us isolate any configuration changes, especially when the output files are large. However, you need to check its output carefully, because some differences may be irrelevant. For example, the command diff lsvg-l-325vg lsvg-l-325vg.41 suggests that the entire 325vg logical volume configuration has changed. Close inspection reveals that in our example the only change is that the output columns, starting with PVs, have shifted position.
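The individual comparisons can also be run in one loop. This sketch follows the file naming convention of our example (".41" marks the post-migration copies); the function name is our own:

```shell
# Report which of the saved configuration files changed across the
# migration, using the map.hdiskN / lsvg-... names shown above.
compare_snapshots() {
    for f in map.hdisk0 map.hdisk1 map.hdisk2 \
             lsvg-rootvg lsvg-l-rootvg lsvg-325vg lsvg-l-325vg; do
        if [ -f "$f" ] && [ -f "$f.41" ]; then
            diff "$f" "$f.41" > /dev/null || echo "$f: configuration changed"
        fi
    done
}
compare_snapshots
```

Only the files reported as changed then need a detailed diff by hand.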

Finally, we need to check the configuration of the journaled file systems in AIX Version 4. Although the command diff filesystems filesystems.41 produces no output, so none of the journaled file system attributes recorded there have changed, we should still check the space utilization of the journaled file systems by executing the following command:


# df -kI
Filesystem 1024-blocks Used Free %Used Mounted on
/dev/hd4 8192 6904 1288 84% /
/dev/hd2 323584 315948 7636 97% /usr
/dev/hd9var 126976 12404 114572 9% /var
/dev/hd3 335872 15656 320216 4% /tmp
/dev/hd1 4096 496 3600 12% /home
/dev/dumpfiles 20480 11128 9352 54% /var/adm/ras
/dev/varrpc 20480 928 19552 4% /var/dce/rpc/socket
/dev/lv00 40960 1384 39576 3% /325jfs

This verifies that only the /usr journaled file system has had more space allocated to it and used. The output also shows that other journaled file systems have small changes in the number of 1024-blocks used, due to a variety of other factors.

Note that, just like the lsvg -l output, the df output format has changed, so we do not benefit from the diff command here. Also, in AIX Version 4 the default block size used by df is 512 bytes, so we need to use the -k flag to report the output in 1024 byte blocks, which is the default for AIX Version 3.

Overall Effects of Migration

Our example shows that there are no major storage management compatibility issues involved in a migration from AIX V3.2.5 to AIX Version 4. However, we cannot, of course, use any new AIX Version 4 features, such as journaled file system compression, on an existing journaled file system that was migrated from AIX Version 3 unless we recreate it and restore its data. We must also be aware of any other migration issues, such as those discussed in AIX Version 4.1 Installation Guide, in particular in the section Compatibility Between AIX Version 3.2 and AIX Version 4.1.

Manipulating Page Space

This section shows you how to implement the most common maintenance tasks for your paging logical volumes. The examples manipulate the hd6 logical volume in AIX Version 4, which contains two mirror copies. However, you can easily apply the procedures described here to other paging devices, where, depending on their attributes, some of the steps may not be needed.

For this section, it is very beneficial for the reader to be familiar with the paging space concepts discussed earlier in this book.


**** Warning - Reboots required ****

It is important to note that active paging devices cannot be deactivated, so any maintenance task that requires this, such as the removal of a paging logical volume, will have to be done at an appropriate time to minimize user disruption. This is arguably a helpful limitation, since it reminds us that any system maintenance task must be carefully scheduled so that you can cope with any problems, foreseen or unforeseen, that arise during your maintenance work.


How to Decrease the Default hd6 Paging Logical Volume

This next section looks at reducing the size of the hd6 default paging space logical volume.

Command Line Summary

This example demonstrates the tasks required to reduce the size of a paging space. In particular, it provides the extra steps required in the more complex scenario where we want to decrease the default rootvg paging logical volume, hd6, when it is part of a mirrored rootvg. First, let's look at the paging spaces that we have:
# lsps -a
Page Space Physical Volume Volume Group Size %Used Active Auto Type
perfpg hdisk1 perfvg 20MB 0 no yes lv
perfpg hdisk8 perfvg 20MB 0 no yes lv
hd6 hdisk5 rootvg 32MB 22 yes yes lv
hd6 hdisk7 rootvg 32MB 22 yes yes lv

This shows us the size of the paging space that we are interested in, as well as the current paging space usage. Next let's see how much memory we have:


# lsattr -E -l sys0 -a realmem
realmem 49152 Amount of usable physical memory in Kbytes False

This gives us a basis for calculating how big the paging space needs to be; more information is presented in the detailed guidance section. Before we alter hd6, though, we must create a temporary paging space, since it would be a bad idea to be without one:


# mkps -a -n -s 20 rootvg
paging00

This makes a new paging space of 20 logical partitions, which is 80MB given the 4MB partition size of rootvg. Next we make hd6 inactive, and cause paging00 to be used at the next reboot:


# chps -a n hd6
#
edit the one entry in /sbin/rc.boot... search via /swapon to find line near
# Start paging if no dump
[ ! -f /needcopydump ] && swapon /dev/paging00
# bosboot -l /dev/hd5x -d /dev/hdisk7 -a
# bosboot: Boot image is 4275 512 byte blocks.
# bosboot -l /dev/hd5 -d /dev/hdisk5 -a
# bosboot: Boot image is 4275 512 byte blocks.

We also recreate the boot images to reflect the change and then reboot. Now we can remove and recreate the hd6 page space to be the size that we wish. First ensure that the dump device is not hd6:


# sysdumpdev -pP /dev/sysdumpnull
# sysdumpdev -l
primary /dev/sysdumpnull
secondary /dev/sysdumpnull
copy directory /tmp
forced copy flag TRUE

Then remove hd6, recreate the two mirror copies with the new required size, then activate it:


# rmps hd6
# mklv -y'hd6' -t'paging' -e'x' -c'2' -v'y' 'rootvg' '7'
# swapon /dev/hd6

The rc.boot file must now be edited again to reflect the new page space, and the boot images again updated:


edit the one entry in /sbin/rc.boot... search via /swapon to find line near
# Start paging if no dump
[ ! -f /needcopydump ] && swapon /dev/hd6
# bosboot -l /dev/hd5x -d /dev/hdisk7 -a
# bosboot: Boot image is 4275 512 byte blocks.
# bosboot -l /dev/hd5 -d /dev/hdisk5 -a
# bosboot: Boot image is 4275 512 byte blocks.

Now reboot the system and then update the bootlist:


# shutdown -Fr
# bootlist -m normal hdisk5 hdisk7

Finally, remove the temporary page space:


# chps -a n paging00
# rmps paging00

Don't forget to change the system dump device back if required.

Detailed Guidance

This task has a number of component steps that also serve to illustrate general paging space management tasks. Thus, although this entire section is based on the one procedure given in the AIX Version 4.1 System Management Guide: Operating System and Devices article Resizing or Moving the hd6 Paging Space, we break the procedure into the following subsections.

How to Check Prerequisites before Changing hd6

The first task is to properly understand your system's performance so that you do not remove too much capacity from your paging logical volumes. In this example, we can execute the following commands to give us a current snapshot:


# lsps -a
Page Space Physical Volume Volume Group Size %Used Active Auto Type
perfpg hdisk1 perfvg 20MB 0 no yes lv
perfpg hdisk8 perfvg 20MB 0 no yes lv
hd6 hdisk5 rootvg 32MB 22 yes yes lv
hd6 hdisk7 rootvg 32MB 22 yes yes lv
#
# lsattr -E -l sys0 -a realmem
realmem 49152 Amount of usable physical memory in Kbytes False
#

You can see that we are currently using only about 7MB of our paging space, even though we have 48MB of RAM.


**** Warning - Understand performance ****

It is very important to be familiar with the issues discussed in the AIX V3.2 Performance Monitoring and Tuning Guide (which may be available in the AIX Version 4.1 Hypertext Information Base Library on your system) before you decide exactly how much paging space you need.

This decision depends on many factors, such as application needs and the number of users.


In this example, we want to maintain the amount of paging space suggested by the following rule of thumb:

  • Use a 1:1 ratio of total paging space to system RAM.
  • This means that we only want to decrease hd6 by 4MB so that we will still have a total of 48MB of paging space.

    As is described in Managing Physical Volumes, there are many ways to check the physical partition layout and usage in a volume group. To check rootvg, we can execute:


    # lspv -l hdisk5
    hdisk5:
    LV NAME LPs PPs DISTRIBUTION MOUNT POINT
    hd5 1 1 01..00..00..00..00 N/A
    hd6 8 8 00..08..00..00..00 N/A
    hd8 1 1 00..00..01..00..00 N/A
    hd4 1 1 00..00..01..00..00 /
    hd2 50 50 00..00..29..21..00 /usr
    hd9var 3 3 00..00..00..03..00 /var
    hd3 3 3 00..00..00..03..00 /tmp
    hd1 1 1 00..00..00..01..00 /home
    # lspv -l hdisk7
    hdisk7:
    LV NAME LPs PPs DISTRIBUTION MOUNT POINT
    hd2 50 50 02..00..14..17..17 /usr
    hd9var 3 3 03..00..00..00..00 /var
    hd3 3 3 03..00..00..00..00 /tmp
    hd1 1 1 01..00..00..00..00 /home
    hd5x 2 2 02..00..00..00..00 N/A
    hd6 8 8 00..08..00..00..00 N/A
    hd8 1 1 00..00..01..00..00 N/A
    hd4 1 1 00..00..01..00..00 /
    #

    We can clearly see that rootvg is mirrored in a high availability configuration to help protect us from disk failure. So we know that we will gain a free, unallocated physical partition on each physical volume when we decrease hd6 by one logical partition.

    We also know from the lsdev -Cc disk command that hdisk7 is only a 355MB disk, and so is almost full, whereas hdisk5 is a 670MB disk, and so has many more free physical partitions, particularly near its edges (note the first and last columns are mainly 00 under the heading DISTRIBUTION above).

    Based on this information, we have decided to create a non-mirrored temporary paging space. We could make it small, but we'll follow the suggestion in the Resizing or Moving the hd6 Paging Space article, and make it 80MB in size, since we know that we have enough space.

    Finally, note that it is not good enough to simply activate the perfpg paging logical volume to use it while we work with hd6, because perfpg is in the perfvg volume group. The AIX Version 3 and AIX Version 4 boot processes expect the hd6 logical volume to be in the root volume group if hd6 is used, since this volume group is the first one that is accessed during the boot process.

    How to Add a New Paging Logical Volume to a Volume Group

    Now that we've checked the rootvg volume group, we need to create another paging device to use temporarily as the main system paging device while we work with hd6. The process described here is very similar to that documented in the article Adding and Activating a Paging Space. Note that we could create a paging-type logical volume with the mklv command; however, this process has already been described in How to Create a Paging Type Logical Volume, and since we are not concerned about the physical partition location here, we can use the mkps command.

    To create a new paging logical volume:

    1. Execute the fast path smitty mkps to get to the screen that looks like:
       









      _________________________________________________________________________
      | |
      | VOLUME GROUP name |
      | |
      | Move cursor to desired item and press Enter. |
      | |
      | availvg |
      | rootvg |
      | perfvg |
      | stripevg |
      | |
      | F1=Help F2=Refresh F3=Cancel |
      | F8=Image F10=Exit Enter=Do |
      | /=Find n=Find Next |
      |_________________________________________________________________________|

      Alternatively, you can go through the smit hierarchy by:

      1. Executing smitty.
      2. Selecting System Storage Management (Physical & Logical Storage).
      3. Selecting Logical Volume Manager.
      4. Selecting Paging Space.
      5. Selecting Add Another Paging Space to get to a screen that prompts you to select a volume group from a menu similar to that shown above.
    2. Use the Arrow keys to highlight the rootvg volume group name, and then press the Enter=Do key.
    3. Type 20 for the field SIZE of paging space (in logical partitions); 20 times 4MB gives us an 80MB temporary paging logical volume.
    4. Use the Tab key to toggle the field Start using this paging space NOW? from no to yes, or use the F4=List key to select it.
    5. Use the Tab key to toggle the field Use this paging space each time the system is RESTARTED? from no to yes so that your screen looks like:
                                  Add Another Paging Space

      Type or select values in entry fields.
      Press Enter AFTER making all desired changes.

      [Entry Fields]
      Volume group name rootvg
      SIZE of paging space (in logical partitions) [20] #
      PHYSICAL VOLUME name +
      Start using this paging space NOW? yes +
      Use this paging space each time the system is yes +
      RESTARTED?










      F1=Help F2=Refresh F3=Cancel F4=List
      F5=Reset F6=Command F7=Edit F8=Image
      F9=Shell F10=Exit Enter=Do

    6. Press the Enter=Do key to create the temporary paging logical volume.
    7. When smit returns the device name, such as paging00 in this example, with smit's OK prompt, you can press the F10=Exit key to return to the command line.
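The smit steps above can also be performed with a single mkps command. A command-line sketch (run as root; AIX assigns the paging space name, which was paging00 in this example):

```shell
# Create a 20-logical-partition (80MB) paging space in rootvg,
# activate it now (-n), and activate it at each restart (-a).
mkps -a -n -s 20 rootvg
```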

    You can now use the command lsps -a to check that this new device is active. To use smit:

    1. Execute smitty.
    2. Select System Storage Management (Physical & Logical Storage).
    3. Select Logical Volume Manager.
    4. Select Paging Space.
    5. Select List All Paging Spaces.
    6. When smit returns the lsps -a output, with smit's OK prompt, you can press the F10=Exit key to return to the command line.
    How to Change the Attributes of a Paging Logical Volume

    We now need to change the hd6 logical volume's boot attributes so that we can remove it. This change example is based on that described in the AIX Version 4.1 Hypertext Information Base Library article Changing or Removing a Paging Space.

    To change the hd6 paging logical volume:

    1. Execute the fast path smitty chps to get to a PAGING SPACE name prompt in a screen like:
       










      _________________________________________________________________________
      | |
      | PAGING SPACE name |
      | |
      | Move cursor to desired item and press Enter. |
      | |
      | paging00 |
      | perfpg |
      | hd6 |
      | |
      | F1=Help F2=Refresh F3=Cancel |
      | F8=Image F10=Exit Enter=Do |
      | /=Find n=Find Next |
      |_________________________________________________________________________|

      Alternatively, you can go through the smit hierarchy by:

      1. Executing smitty.
      2. Selecting System Storage Management (Physical & Logical Storage).
      3. Selecting Logical Volume Manager.
      4. Selecting Paging Space.
      5. Selecting Change / Show Characteristics of a Paging Space to get to a screen that prompts you to select a paging logical volume from a menu similar to that shown above.
    2. Use the Arrow keys to highlight the hd6 paging space name, and then press the Enter=Do key.
    3. Use the Tab key to toggle the field Use this paging space each time the system is RESTARTED? from yes to no so that your screen looks like:
                      Change / Show Characteristics of a Paging Space

      Type or select values in entry fields.
      Press Enter AFTER making all desired changes.

      [Entry Fields]
      Paging space name hd6
      Volume group name rootvg
      Physical volume name hdisk5
      NUMBER of additional logical partitions [] #
      Use this paging space each time the system is no +
      RESTARTED?










      F1=Help F2=Refresh F3=Cancel F4=List
      F5=Reset F6=Command F7=Edit F8=Image
      F9=Shell F10=Exit Enter=Do

    4. Press the Enter=Do key to change the hd6 paging logical volume.
    5. When smit returns an OK prompt, you can press the F10=Exit key to return to the command line.
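The smit session above corresponds to the same chps form used in the command line summary for paging00. A one-line sketch for hd6:

```shell
# Turn off "use this paging space at restart" for hd6 so that it can
# be removed after the next reboot.
chps -a n hd6
```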
    How to Complete the Steps to Rebuild a Smaller hd6

    Although we have changed hd6, it is still an active logical volume (you can see that its LV STATE is open/syncd if you execute the lsvg -l rootvg command). Hence we must reboot the RISC System/6000 with the command shutdown -Fr, but first we must modify the boot file that explicitly references a paging logical volume called hd6. We also need to save our changes so far to the boot logical volumes.

    We need to edit the file /sbin/rc.boot. In this example we used the vi editor, so to use vi:

    1. Execute vi /sbin/rc.boot.
    2. Type the characters /swapon and press the Enter key to find the relevant line.
    3. Press the character w four times while in command mode to move the cursor to a position under the character h in the word hd6.
    4. Press the characters cw to change the word hd6.
    5. Type the word paging00 so that the lines look like:
                              # Start paging if no dump
      [ ! -f /needcopydump ] && swapon /dev/paging00

    6. Press the Esc key to return to command mode.
    7. Type the characters ZZ (capital letters) while in command mode to save the updated rc.boot file.
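The vi steps above make a single substitution; the same edit can be sketched non-interactively. The fragment below demonstrates the substitution on a copy of the relevant line rather than on /sbin/rc.boot itself; to apply it for real, write the sed output to a temporary file and copy it over /sbin/rc.boot after backing up the original.

```shell
# Demonstrate the hd6 -> paging00 substitution on a sample line.
line='[ ! -f /needcopydump ] && swapon /dev/hd6'
echo "$line" | sed 's|swapon /dev/hd6|swapon /dev/paging00|'
```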

    Now if we try to update the boot disk as suggested in the article Resizing or Moving the hd6 Paging Space in InfoExplorer, we will get the following error:


    # bootinfo -b
    hdisk7
    # bosboot -d /dev/hdisk7 -a
    0301-168 bosboot: The current boot logical volume, /dev/hd5,
    does not exist on /dev/hdisk7.

    Since we have a mirrored rootvg with two boot logical volumes on different disks, each boot image needs to be updated with the following commands:


    # bosboot -l /dev/hd5x -d /dev/hdisk7 -a
    bosboot: Boot image is 4275 512 byte blocks.
    # bosboot -l /dev/hd5 -d /dev/hdisk5 -a
    bosboot: Boot image is 4275 512 byte blocks.

    Now we can reboot the RISC System/6000 by executing the command shutdown -Fr.

    When the system is up, you can log in and check that hd6 can now be removed by executing the command:


    # lsvg -l rootvg |head
    rootvg:
    LV NAME TYPE LPs PPs PVs LV STATE MOUNT POINT
    hd6 paging 8 16 2 closed/syncd N/A
    hd5 boot 1 1 1 closed/syncd N/A

    We can now remove the hd6 logical volume.


    **** Warning - Check dump device ****

    If you are using AIX Version 4, then you need to check what your dump device is by executing the command sysdumpdev -l. If your command output looks like:


    # sysdumpdev -l
    primary /dev/hd6
    secondary /dev/sysdumpnull
    copy directory /tmp
    forced copy flag TRUE

    then your hd6 logical volume will still be in an open state and hence cannot be removed. If your primary dump device is hd6, then you can change it by executing the following command (note that you can get to the equivalent smit menus for these commands by executing smitty sysdumpdev):
    # sysdumpdev -Pp /dev/sysdumpnull
    primary /dev/sysdumpnull
    secondary /dev/sysdumpnull
    copy directory /tmp
    forced copy flag TRUE


    Now follow the procedure described in How to Remove a Paging Logical Volume to delete the hd6 logical volume.

    We can now recreate hd6 to make it only 28MB in size, instead of its original 32MB, recovering 4MB of paging space (8MB of physical disk, since hd6 is mirrored).

    The procedure is almost identical to that described in A Design Example for Improved Availability, where we created availlv. The only differences are:

    Note that this command is different from the mklv command in the procedure suggested by the Resizing or Moving the hd6 Paging Space article, since that example only shows you how to work with a single-copy hd6. Also note that we are not following a procedure similar to that described in How to Add a New Paging Logical Volume to a Volume Group, because it would require an extra step to change the name of the new paging logical volume from a name like paging01 to hd6.

    Now that hd6 exists again, we need to activate it, so:

    1. Execute the fast path smitty swapon to get to a menu with the title Activate a Paging Space. Alternatively, you can go through the smit hierarchy by:
      1. Executing smitty.
      2. Selecting System Storage Management (Physical & Logical Storage).
      3. Selecting Logical Volume Manager.
      4. Selecting Paging Space.
      5. Selecting Activate a Paging Space to get to the same menu.
    2. Press the F4=List key to generate a list of paging logical volumes.
    3. Use the Arrow keys to highlight the hd6 logical volume name, and then press the Enter=Do key twice.
    4. When smit returns an OK prompt, press the F10=Exit key to return to the command line.

    Now that hd6 is active, we need to:

    1. Reverse the previous change to the /sbin/rc.boot file.
    2. Repeat the command bosboot -l /dev/hd5x -d /dev/hdisk7 -a.
    3. Repeat the command bosboot -l /dev/hd5 -d /dev/hdisk5 -a.
    4. Execute shutdown -Fr to reboot the RISC System/6000.
    5. When the system comes up, execute the bootlist -m normal hdisk5 hdisk7 command to check that we can still boot from either rootvg mirror copy.

    At this stage, your system is almost back to normal and your paging information should look like:


    # lsps -a
    Page Space Physical Volume Volume Group Size %Used Active Auto Type
    paging00 hdisk5 rootvg 80MB 0 yes yes lv
    perfpg hdisk1 perfvg 20MB 8 yes yes lv
    perfpg hdisk8 perfvg 20MB 8 yes yes lv
    hd6 hdisk5 rootvg 28MB 22 yes no lv
    hd6 hdisk7 rootvg 28MB 21 yes no lv

    This verifies that we have recovered one logical partition, which is two physical partitions (8MB) from the hd6 logical volume, so we can now remove the temporary paging00 logical volume.
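The totals above can also be checked mechanically. A small awk sketch, fed here from a copy of the lsps -a output shown (mirrored copies of the same paging space appear on separate lines but count only once toward the total):

```shell
# Sum the Size column once per distinct paging space name.
awk 'NR > 1 && !seen[$1]++ { sub("MB", "", $4); total += $4 }
     END { print total "MB" }' <<'EOF'
Page Space Physical Volume Volume Group Size %Used Active Auto Type
paging00 hdisk5 rootvg 80MB 0 yes yes lv
perfpg hdisk1 perfvg 20MB 8 yes yes lv
perfpg hdisk8 perfvg 20MB 8 yes yes lv
hd6 hdisk5 rootvg 28MB 22 yes no lv
hd6 hdisk7 rootvg 28MB 21 yes no lv
EOF
```

This prints 128MB (80 + 20 + 28). Once the temporary paging00 is removed, the same pipeline would report 48MB, matching the 1:1 rule of thumb used earlier.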

    How to Remove a Paging Logical Volume

    Now that the smaller hd6 logical volume has been returned to its original operating conditions, we can follow the process described in the article Changing or Removing a Paging Space to remove our temporary logical volume.

    To remove the paging device paging00:

    1. Follow the procedure given in How to Change the Attributes of a Paging Logical Volume to change paging00 so that it will not be active after a reboot.
    2. Reboot the RISC System/6000 by executing the shutdown -Fr command.
    3. When the system is up, log in as root and execute the fast path smitty rmps to get to the menu with the title Remove a Paging Space. Alternatively, you can go through the smit hierarchy by:
      1. Executing smitty.
      2. Selecting System Storage Management (Physical & Logical Storage).
      3. Selecting Logical Volume Manager.
      4. Selecting Paging Space.
      5. Selecting Remove a Paging Space to get to the same menu.
    4. Press the F4=List key to generate a list of paging logical volumes.
    5. Use the Arrow keys to highlight the paging00 logical volume name, and then press the Enter=Do key three times (once to enter the name in the field, once to get the warning, and the third time to execute the command).
    6. When smit returns an OK prompt, press the F10=Exit key to return to the command line.

    Common Disk Management and Error Recovery Procedures

    This section will show examples of the use of the migratepv command and the rvgrecover shell script. We also include the contents of the dsksync script that many people have used in AIX Version 3, although we did not test this script in AIX Version 4.

    For further examples of recovery procedures, see General Volume Group Recovery.

    How to Use the migratepv Command

    In this section we will look at how to migrate the contents of one physical volume to another physical volume within the same volume group.

    The example uses the volume group availvg, which consists of the two physical volumes hdisk0 and hdisk2; the contents of physical volume hdisk0 will be migrated to physical volume hdisk3.

    You will note that the physical volume names have changed for this volume group and do not match those listed in A Design Example for Improved Availability. The change has occurred as a result of running the mksysb restore example. See Managing Backup and Restore for more details.

    Also note that in our tests we could not successfully migrate all logical volumes in one step using the migratepv command, although this should have been possible. To work around this problem, we used a variant of the migratepv command that allows migration of individual logical volumes. However, in the command line summary we have used the variant of the migratepv command that performs an entire physical volume migration.
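The per-logical-volume workaround can be sketched as follows. This fragment only prints the commands rather than running them (the -l flag limits migratepv to one logical volume), using the disk and logical volume names from this example; on a live system the names would come from lspv -l hdisk0:

```shell
# Print one migratepv call per logical volume on the source disk.
for lv in availlv loglv00; do
    echo "migratepv -l $lv hdisk0 hdisk3"
done
```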

    Command Line Summary

    1. Using the lspv command, check to see if there is a physical volume which is currently not assigned to a volume group:
      # lspv
      hdisk0 0000020158496d72 availvg
      hdisk1 00000201dc8b0b32 perfvg
      hdisk2 000002007bb618f5 availvg
      hdisk3 none None
      hdisk4 000137231982c0f2 stripevg
      hdisk5 00014732b1bd7f57 rootvg
      hdisk6 0001221800072440 stripevg
      hdisk7 00012218da42ba76 rootvg
      hdisk8 0002479088f5f347 perfvg

    2. Add physical volume hdisk3 to volume group availvg:
      # extendvg -f 'availvg' 'hdisk3'

    3. Identify the logical volumes in volume group availvg:
      # lsvg -l availvg
      availvg:
      LV NAME TYPE LPs PPs PVs LV STATE MOUNT POINT
      availlv jfs 6 12 2 open/syncd /availjfs
      loglv00 jfslog 1 2 2 open/syncd N/A

    4. Migrate the contents of hdisk0 to hdisk3:
      # migratepv 'hdisk0' 'hdisk3'

    5. To confirm that all physical partitions have been migrated, execute the lspv command on hdisk0 and hdisk3:
      # lspv -M hdisk0
      hdisk0:1-287
      #
      # lspv -M hdisk3
      hdisk3:1-81
      hdisk3:82 availlv:1:2
      hdisk3:83 availlv:2:2
      hdisk3:84 availlv:3:2
      hdisk3:85 availlv:4:2
      hdisk3:86 availlv:5:2
      hdisk3:87 availlv:6:2
      hdisk3:88 loglv00:1:2
      hdisk3:89-287

      The above results show that the migratepv command has moved the contents of hdisk0 to hdisk3.
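The lspv -M listings in the last step can also be summarized mechanically. A sketch that counts allocated physical partitions per logical volume, fed from a copy of the hdisk3 map above:

```shell
# Allocated partitions have two fields (partition and LV:LP:copy);
# free ranges such as hdisk3:1-81 have one field and are skipped.
awk 'NF == 2 { split($2, a, ":"); count[a[1]]++ }
     END { for (lv in count) print lv, count[lv] }' <<'EOF' | sort
hdisk3:1-81
hdisk3:82 availlv:1:2
hdisk3:83 availlv:2:2
hdisk3:84 availlv:3:2
hdisk3:85 availlv:4:2
hdisk3:86 availlv:5:2
hdisk3:87 availlv:6:2
hdisk3:88 loglv00:1:2
hdisk3:89-287
EOF
```

This reports 6 partitions for availlv and 1 for loglv00, matching the counts moved from hdisk0.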

    Detailed Guidance

    Let us now look at these steps in more detail, and see how a physical volume migration can be done using smit. We will also look at the commands which help us identify whether or not all physical partitions have been migrated.

    How to Migrate Physical Volume contents to another disk

    Before we start a physical volume migration to another disk, we need to confirm that the target physical volume has sufficient free capacity to hold all the physical partitions that will be migrated. In this section we will look at the commands that provide this vital information.

    To reiterate, we will migrate physical volume hdisk0 to physical volume hdisk3.

    1. Identify all physical volumes which are currently not assigned to a volume group using the command:
      # lspv
      hdisk0 0000020158496d72 availvg
      hdisk1 00000201dc8b0b32 perfvg
      hdisk2 000002007bb618f5 availvg
      hdisk3 none None
      hdisk4 000137231982c0f2 stripevg
      hdisk5 00014732b1bd7f57 rootvg
      hdisk6 0001221800072440 stripevg
      hdisk7 00012218da42ba76 rootvg
      hdisk8 0002479088f5f347 perfvg

      Each line of the above output shows the name of a configured physical volume. If this physical volume belongs to an existing volume group, the line also shows its system-wide unique physical volume identifier and the name of the volume group to which it belongs.

      From the above information we note that physical volume hdisk3 does not currently belong to any volume group, making it a candidate for the target disk in this example.

    2. Check the partition map for our source physical volume hdisk0:
      # lspv -M hdisk0
      hdisk0:1-81
      hdisk0:82 availlv:1:2
      hdisk0:83 availlv:2:2
      hdisk0:84 availlv:3:2
      hdisk0:85 availlv:4:2
      hdisk0:86 availlv:5:2
      hdisk0:87 availlv:6:2
      hdisk0:88 loglv00:1:2
      hdisk0:89-287

      Note that logical volumes availlv and loglv00 have their second logical partition copies allocated on this physical volume. The logical volume availlv has 6 physical partitions allocated on this physical volume, and logical volume loglv00 has 1 physical partition.

    3. Using the lsdev command, let us look at the size of physical volumes hdisk0 and hdisk3:
      # lsdev -Cc disk
      hdisk0 Available 00-07-00-0,0 1.2 GB SCSI Disk Drive (in 2.4 GB Disk Unit)
      hdisk1 Available 00-07-00-1,0 1.2 GB SCSI Disk Drive (in 2.4 GB Disk Unit)
      hdisk2 Available 00-07-00-2,0 1.2 GB SCSI Disk Drive (in 2.4 GB Disk Unit)
      hdisk3 Available 00-07-00-3,0 1.2 GB SCSI Disk Drive (in 2.4 GB Disk Unit)
      hdisk4 Available 00-07-00-4,0 857 MB SCSI Disk Drive
      hdisk5 Available 00-08-00-0,0 670 MB SCSI Disk Drive
      hdisk6 Available 00-08-00-1,0 670 MB SCSI Disk Drive
      hdisk7 Available 00-08-00-2,0 355 MB SCSI Disk Drive
      hdisk8 Available 00-08-00-3,0 320 MB SCSI Disk Drive

      Since hdisk0 and hdisk3 are of identical size, there should be no problems in performing the physical volume migration.

    4. Add physical volume hdisk3 to the volume group availvg using the command smitty extendvg.

      The following screen will appear:


                          Add a Physical Volume to a Volume Group

      Type or select values in entry fields.
      Press Enter AFTER making all desired changes.

      [Entry Fields]
      * VOLUME GROUP name [availvg] +
      * PHYSICAL VOLUME names [hdisk3] +





      F1=Help F2=Refresh F3=Cancel F4=List
      F5=Reset F6=Command F7=Edit F8=Image

      On this smit screen:

      1. Enter availvg for the field VOLUME GROUP name.
      2. Enter hdisk3 for the field PHYSICAL VOLUME names.
      3. Press Enter.
      4. Press F10 after smit returns with OK.
      5. Confirm that we now have three physical volumes in volume group availvg using the command:
        # lsvg -p availvg
        availvg:
        PV_NAME PV STATE TOTAL PPs FREE PPs FREE DISTRIBUTION
        hdisk0 active 287 280 58..50..57..57..58
        hdisk2 active 287 280 58..50..57..57..58
        hdisk3 active 287 287 58..57..57..57..58

        From the above output we can see that three physical volumes now exist in availvg, and as expected all physical partitions on hdisk3 are free.


        **** Note **** It is not a requirement that a new target physical volume is added to the volume group. For a migratepv to succeed it is only necessary that the target physical volume has a sufficient number of free physical partitions equal to or greater than the number of partitions being moved from the source physical volume.
      We are now ready to perform the migration test. Since the migratepv command still allows access to the data being migrated, we will demonstrate this by executing the following shell script, called migpvtst, during the migration process.
      #!/bin/ksh
      # migpvtst
      cd /availjfs
      while true
      do
      ls
      sleep 1
      done

    5. Execute migpvtst from another terminal:
      # ksh migpvtst
      cmds.rom.dd
      cmds.rom.dd
      cmds.rom.dd

      The above sample output shows that the file cmds.rom.dd was displayed once every second.
    6. Migrate the contents of physical volume hdisk0 to hdisk3 using the command smitty migratepv:
                         Move Contents of a Physical Volume

      Type or select a value for the entry field.
      Press Enter AFTER making all desired changes.

      [Entry Fields]
      * SOURCE physical volume name [hdisk0] +








      F1=Help F2=Refresh F3=Cancel F4=List
      F5=Reset F6=Command F7=Edit F8=Image
      F9=Shell F10=Exit Enter=Do

      1. On this smit screen enter hdisk0 for the field SOURCE physical volume name.
      2. Press Enter.

        On the next smit screen, shown below:

      3. Enter hdisk3 for the field DESTINATION physical volumes.
      4. Press Enter.
      5. Press F10 when smit returns with OK.

                       Move Contents of a Physical Volume

      Type or select values in entry fields.
      Press Enter AFTER making all desired changes.

      [Entry Fields]
      * SOURCE physical volume name hdisk0
      * DESTINATION physical volumes [hdisk3] +
      Move only data belonging to this [] +
      LOGICAL VOLUME?







      F1=Help F2=Refresh F3=Cancel F4=List
      F5=Reset F6=Command F7=Edit F8=Image
      F9=Shell F10=Exit Enter=Do


      **** Warning **** If smit returns with OK and also displays error messages like:
      0516-158 lmigratepp: Destination physical partition number not
              entered.
      Usage: lmigratepp -g VGid -p SourcePVid -n SourcePPnumber
              -P DestinationPVid -N DestinationPPnumber
      0516-812 migratepv: Warning, migratepv did not completely
              succeed; all physical partitions have not been
              moved off the PV.
      
      this indicates that the migration has failed. When this happens, rerun the command for each logical volume that exists on the source physical volume. The command lspv -l diskname will identify all logical volumes contained on the physical volume specified by the diskname parameter. Alternatively, press F4 on the field Move only data belonging to this LOGICAL VOLUME? and select a logical volume from the displayed list.
    7. Confirm that there are no physical partitions allocated on hdisk0 by executing the command:
      # lspv -M hdisk0
      hdisk0:1-287

      As expected all 287 physical partitions are now free on hdisk0.

    8. Confirm that seven physical partitions now exist on hdisk3 using the command:
      # lspv -M hdisk3
      hdisk3:1-81
      hdisk3:82 availlv:1:2
      hdisk3:83 availlv:2:2
      hdisk3:84 availlv:3:2
      hdisk3:85 availlv:4:2
      hdisk3:86 availlv:5:2
      hdisk3:87 availlv:6:2
      hdisk3:88 loglv00:1:2
      hdisk3:89-287

    9. Press Ctrl-C to stop the shell script migpvtst from running.

    Note that the shell script migpvtst continued to list the files in the directory /availjfs both during and after the migration of physical volume hdisk0. The above steps show us that the contents of one physical volume can easily be migrated to another, and furthermore, without denying users access to data residing on the source physical volume.
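As noted earlier, adding the target disk to the volume group is optional; what matters is that it has enough free physical partitions. That check can be mechanized with a sketch like the following, fed from a copy of the lsvg -p availvg output shown earlier (field 4 is FREE PPs, field 3 is TOTAL PPs):

```shell
# Compare free partitions on the target (hdisk3) against the number
# allocated on the source (hdisk0 = TOTAL PPs - FREE PPs = 287 - 280).
free=$(awk '$1 == "hdisk3" { print $4 }' <<'EOF'
hdisk0 active 287 280 58..50..57..57..58
hdisk2 active 287 280 58..50..57..57..58
hdisk3 active 287 287 58..57..57..57..58
EOF
)
used_on_source=$(( 287 - 280 ))
[ "$free" -ge "$used_on_source" ] && echo "enough free partitions"
```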

    How to Use the rvgrecover Shell Script

    There are many references available to help you resolve problems quickly. These include the article Recovering from Disk Drive Problems in AIX Version 4.1 System Management Guide: Operating System and Devices, and also the article Recovering Volume Groups in the AIX Version 4.1 Problem Solving Guide and Reference. This latter article includes the following script called rvgrecover:


    PV=/dev/ipldevice
    VG=rootvg
    cp /etc/objrepos/CuAt /etc/objrepos/CuAt.$$
    cp /etc/objrepos/CuDep /etc/objrepos/CuDep.$$
    cp /etc/objrepos/CuDv /etc/objrepos/CuDv.$$
    cp /etc/objrepos/CuDvDr /etc/objrepos/CuDvDr.$$
    lqueryvg -Lp $PV | awk '{ print $2 }' | while read LVname; do
    odmdelete -q "name = $LVname" -o CuAt
    odmdelete -q "name = $LVname" -o CuDv
    odmdelete -q "value3 = $LVname" -o CuDvDr
    done
    odmdelete -q "name = $VG" -o CuAt
    odmdelete -q "parent = $VG" -o CuDv
    odmdelete -q "name = $VG" -o CuDv
    odmdelete -q "name = $VG" -o CuDep
    odmdelete -q "dependency = $VG" -o CuDep
    odmdelete -q "value1 = 10" -o CuDvDr
    odmdelete -q "value3 = $VG" -o CuDvDr
    importvg -y $VG $PV # ignore lvaryoffvg errors
    varyonvg $VG

    To test this script, note that we start with a system whose ODM is fine as indicated by:


    # lsdev -Cc disk
    hdisk0 Available 00-08-00-0,0 670 MB SCSI Disk Drive
    hdisk1 Available 00-08-00-1,0 355 MB SCSI Disk Drive
    hdisk2 Available 00-08-00-2,0 355 MB SCSI Disk Drive
    # lsvg rootvg
    VOLUME GROUP: rootvg VG IDENTIFIER: 000005083df45081
    VG STATE: active PP SIZE: 4 megabyte(s)
    VG PERMISSION: read/write TOTAL PPs: 243 (972 megabytes)
    MAX LVs: 256 FREE PPs: 5 (20 megabytes)
    LVs: 16 USED PPs: 238 (952 megabytes)
    OPEN LVs: 11 QUORUM: 2
    TOTAL PVs: 2 VG DESCRIPTORS: 3
    STALE PVs: 0 STALE PPs 0
    ACTIVE PVs: 2 AUTO ON: yes

    To simulate a corrupt ODM, we can execute the following commands:


    **** Warning - DO NOT DO THIS ****

    You must use a test machine to do this process, since if you have any problems, you may have to reinstall.



    # odmdelete -o CuAt -q'name=rootvg'
    0518-307 odmdelete: 3 objects deleted.
    # lsvg rootvg
    0516-310 lsvg: Unable to find attribute rootvg in the Device
    Configuration Database. Execute synclvodm to attempt to
    correct the database.
    #
    # odmdelete -o CuAt -q'name=hd3'
    0518-307 odmdelete: 4 objects deleted.
    # odmdelete -o CuAt -q'name=hd5'
    0518-307 odmdelete: 5 objects deleted.
    # lslv hd3
    LOGICAL VOLUME: hd3 VOLUME GROUP: rootvg
    LV IDENTIFIER: PERMISSION: ?
    VG STATE: inactive LV STATE: ?
    TYPE: jfs WRITE VERIFY: ?
    MAX LPs: ? PP SIZE: ?
    COPIES: 1 SCHED POLICY: ?
    LPs: ? PPs: ?
    STALE PPs: ? BB POLICY: ?
    INTER-POLICY: minimum RELOCATABLE: yes
    INTRA-POLICY: middle UPPER BOUND: 32
    MOUNT POINT: /tmp LABEL: None
    MIRROR WRITE CONSISTENCY: ?
    EACH LP COPY ON A SEPARATE PV ?: yes

    Now we can execute the rvgrecover shell script. During the execution, you may see messages on your screen like:


    0518-307 odmdelete: 1 objects deleted.
    0518-307 odmdelete: 0 objects deleted.
    0516-510 updatevg: Physical volume not found for physical volume
    identifier 00000997c020352d.
    0516-548 synclvodm: Partially successful with updating volume
    group rootvg.
    0516-782 importvg: Partially successful importing of /dev/ipldevice.

    We can then check whether the rootvg has been recovered by executing:


    # lsvg rootvg
    VOLUME GROUP: rootvg VG IDENTIFIER: 000005083df45081
    VG STATE: active PP SIZE: 4 megabyte(s)
    VG PERMISSION: read/write TOTAL PPs: 243 (972 megabytes)
    MAX LVs: 256 FREE PPs: 5 (20 megabytes)
    LVs: 16 USED PPs: 238 (952 megabytes)
    OPEN LVs: 11 QUORUM: 2
    TOTAL PVs: 2 VG DESCRIPTORS: 3
    STALE PVs: 0 STALE PPs 0
    ACTIVE PVs: 2 AUTO ON: yes
    # lslv hd3
    0516-304 lslv: Unable to find device id 00000997c020352d in the Device
    Configuration Database.
    LOGICAL VOLUME: hd3 VOLUME GROUP: rootvg
    LV IDENTIFIER: 000005083df45081.9 PERMISSION: read/write
    VG STATE: active/complete LV STATE: opened/syncd
    TYPE: jfs WRITE VERIFY: off
    MAX LPs: 128 PP SIZE: 4 megabyte(s)
    COPIES: 1 SCHED POLICY: parallel
    LPs: 82 PPs: 82
    STALE PPs: 0 BB POLICY: relocatable
    INTER-POLICY: minimum RELOCATABLE: yes
    INTRA-POLICY: center UPPER BOUND: 32
    MOUNT POINT: /tmp LABEL: /tmp
    MIRROR WRITE CONSISTENCY: on
    EACH LP COPY ON A SEPARATE PV ?: yes

    It seems that there is still an ODM problem, as indicated by the 0516-304 error message. After the initial invocation of the rvgrecover script, only some objects in the ODM database (CuAt) were recovered; the physical volume information for hdisk0 was not immediately recovered by this script. When we reboot the RISC System/6000, we find that the PVid for hdisk0 is recovered from the VGDA on one of the rootvg disks. However, as can be seen in the following output, hdisk0 is still not included as part of the rootvg volume group, since its status is none:


    # lspv
    hdisk0 00000997c020352d none
    hdisk1 00000997c01fd413 rootvg
    hdisk2 000010732623885a 325vg

    However, we can repeat the execution of the rvgrecover script after this reboot, and then we find that the ODM information for physical volume hdisk0 is updated correctly. This can be seen from:


    # lspv
    hdisk0 00000997c020352d rootvg
    hdisk1 00000997c01fd413 rootvg
    hdisk2 000010732623885a 325vg

    How to Use the dsksync Shell Script

    This shell script synchronizes the disk names on an AIX Version 3 system so that they are numbered in the expected order. The order may come to differ from that expected from the configuration rules as physical volumes and adapters are added to and removed from your system over time; for example: hdisk0, hdisk2, hdisk3 instead of hdisk0, hdisk1, hdisk2. The order of the disk names generally does not cause errors, but it may cause confusion for the user. Run the following dsksync script to alleviate such confusion; the script renames the hard disks.

    You may need to use a shell script similar to that given in How to Use the rvgrecover Shell Script after you run this script. Make sure the key is in the Normal position before running this script.


    lsdev -Cc disk | awk '{ print $1 }' | while read HDname; do
    odmdelete -q "name = $HDname" -o CuAt
    odmdelete -q "value = $HDname" -o CuAt
    odmdelete -q "name = $HDname" -o CuDep
    odmdelete -q "name = $HDname" -o CuDv
    odmdelete -q "value3 = $HDname" -o CuDvDr
    odmdelete -q "name = $HDname" -o CuVPD
    done
    rm -f /dev/hdisk*
    rm -f /dev/rhdisk*

    savebase

    When the shell script completes successfully, run the shutdown -Fr command to shutdown and reboot AIX Version 3.